00:00:00.001 Started by upstream project "autotest-per-patch" build number 132833
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.152 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:10.270 The recommended git tool is: git
00:00:10.270 using credential 00000000-0000-0000-0000-000000000002
00:00:10.272 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:10.287 Fetching changes from the remote Git repository
00:00:10.289 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:10.301 Using shallow fetch with depth 1
00:00:10.301 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:10.301 > git --version # timeout=10
00:00:10.312 > git --version # 'git version 2.39.2'
00:00:10.312 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:10.324 Setting http proxy: proxy-dmz.intel.com:911
00:00:10.324 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:13.360 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:13.371 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:13.382 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:13.382 > git config core.sparsecheckout # timeout=10
00:00:13.394 > git read-tree -mu HEAD # timeout=10
00:00:13.409 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:13.426 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:13.426 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:13.525 [Pipeline] Start of Pipeline
00:00:13.537 [Pipeline] library
00:00:13.539 Loading library shm_lib@master
00:00:13.539 Library shm_lib@master is cached. Copying from home.
00:00:13.554 [Pipeline] node
00:00:13.566 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:13.567 [Pipeline] {
00:00:13.577 [Pipeline] catchError
00:00:13.578 [Pipeline] {
00:00:13.591 [Pipeline] wrap
00:00:13.599 [Pipeline] {
00:00:13.607 [Pipeline] stage
00:00:13.609 [Pipeline] { (Prologue)
00:00:13.626 [Pipeline] echo
00:00:13.628 Node: VM-host-WFP1
00:00:13.634 [Pipeline] cleanWs
00:00:13.646 [WS-CLEANUP] Deleting project workspace...
00:00:13.646 [WS-CLEANUP] Deferred wipeout is used...
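
The checkout above follows Jenkins' pinned, shallow-clone pattern: fetch only refs/heads/master at depth 1, then force-checkout the exact commit resolved from FETCH_HEAD. A minimal standalone sketch of the same sequence, reusing the URL and commit hash from the log (the local directory name is illustrative):

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # Shallow fetch keeps the clone small; FETCH_HEAD then points at the tip of master.
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507  # pin to the revision Jenkins resolved
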
00:00:13.653 [WS-CLEANUP] done
00:00:13.863 [Pipeline] setCustomBuildProperty
00:00:13.950 [Pipeline] httpRequest
00:00:14.743 [Pipeline] echo
00:00:14.745 Sorcerer 10.211.164.112 is alive
00:00:14.751 [Pipeline] retry
00:00:14.753 [Pipeline] {
00:00:14.765 [Pipeline] httpRequest
00:00:14.769 HttpMethod: GET
00:00:14.770 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.770 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.796 Response Code: HTTP/1.1 200 OK
00:00:14.797 Success: Status code 200 is in the accepted range: 200,404
00:00:14.797 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:34.555 [Pipeline] }
00:00:34.573 [Pipeline] // retry
00:00:34.581 [Pipeline] sh
00:00:34.863 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:34.879 [Pipeline] httpRequest
00:00:35.918 [Pipeline] echo
00:00:35.920 Sorcerer 10.211.164.112 is alive
00:00:35.930 [Pipeline] retry
00:00:35.932 [Pipeline] {
00:00:35.947 [Pipeline] httpRequest
00:00:35.953 HttpMethod: GET
00:00:35.954 URL: http://10.211.164.112/packages/spdk_4cd130da1ccf25d41410527faa4734800d4e2eda.tar.gz
00:00:35.954 Sending request to url: http://10.211.164.112/packages/spdk_4cd130da1ccf25d41410527faa4734800d4e2eda.tar.gz
00:00:35.960 Response Code: HTTP/1.1 200 OK
00:00:35.961 Success: Status code 200 is in the accepted range: 200,404
00:00:35.961 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_4cd130da1ccf25d41410527faa4734800d4e2eda.tar.gz
00:02:16.190 [Pipeline] }
00:02:16.207 [Pipeline] // retry
00:02:16.214 [Pipeline] sh
00:02:16.558 + tar --no-same-owner -xf spdk_4cd130da1ccf25d41410527faa4734800d4e2eda.tar.gz
00:02:19.105 [Pipeline] sh
00:02:19.385 + git -C spdk log --oneline -n5
00:02:19.385 4cd130da1 test/check_so_deps: use VERSION to look for prior tags
00:02:19.385 e576aacaf build: use VERSION file for storing version
00:02:19.385 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:02:19.385 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:02:19.385 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:02:19.401 [Pipeline] writeFile
00:02:19.414 [Pipeline] sh
00:02:19.696 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:19.707 [Pipeline] sh
00:02:19.987 + cat autorun-spdk.conf
00:02:19.987 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:19.987 SPDK_TEST_NVME=1
00:02:19.987 SPDK_TEST_FTL=1
00:02:19.987 SPDK_TEST_ISAL=1
00:02:19.987 SPDK_RUN_ASAN=1
00:02:19.987 SPDK_RUN_UBSAN=1
00:02:19.987 SPDK_TEST_XNVME=1
00:02:19.987 SPDK_TEST_NVME_FDP=1
00:02:19.987 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:19.994 RUN_NIGHTLY=0
00:02:19.996 [Pipeline] }
00:02:20.010 [Pipeline] // stage
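
The autorun-spdk.conf written above is plain bash: each assignment is a feature flag that the test harness sources and checks. A minimal sketch of how such a flag file is typically consumed (illustrative only, not SPDK's actual harness code):

    #!/bin/bash
    source ./autorun-spdk.conf            # brings SPDK_TEST_NVME=1 etc. into scope
    : "${RUN_NIGHTLY:=0}"                 # default for flags the conf may omit
    if (( SPDK_TEST_NVME == 1 )); then
        echo "NVMe functional tests requested"
    fi
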
00:02:20.024 [Pipeline] stage
00:02:20.026 [Pipeline] { (Run VM)
00:02:20.038 [Pipeline] sh
00:02:20.322 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:20.322 + echo 'Start stage prepare_nvme.sh'
00:02:20.322 Start stage prepare_nvme.sh
00:02:20.322 + [[ -n 7 ]]
00:02:20.322 + disk_prefix=ex7
00:02:20.322 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:20.322 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:20.322 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:20.322 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.322 ++ SPDK_TEST_NVME=1
00:02:20.322 ++ SPDK_TEST_FTL=1
00:02:20.322 ++ SPDK_TEST_ISAL=1
00:02:20.322 ++ SPDK_RUN_ASAN=1
00:02:20.322 ++ SPDK_RUN_UBSAN=1
00:02:20.322 ++ SPDK_TEST_XNVME=1
00:02:20.322 ++ SPDK_TEST_NVME_FDP=1
00:02:20.322 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.322 ++ RUN_NIGHTLY=0
00:02:20.322 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:20.322 + nvme_files=()
00:02:20.322 + declare -A nvme_files
00:02:20.322 + backend_dir=/var/lib/libvirt/images/backends
00:02:20.322 + nvme_files['nvme.img']=5G
00:02:20.322 + nvme_files['nvme-cmb.img']=5G
00:02:20.322 + nvme_files['nvme-multi0.img']=4G
00:02:20.322 + nvme_files['nvme-multi1.img']=4G
00:02:20.322 + nvme_files['nvme-multi2.img']=4G
00:02:20.322 + nvme_files['nvme-openstack.img']=8G
00:02:20.322 + nvme_files['nvme-zns.img']=5G
00:02:20.322 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:20.322 + (( SPDK_TEST_FTL == 1 ))
00:02:20.322 + nvme_files["nvme-ftl.img"]=6G
00:02:20.322 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:20.322 + nvme_files["nvme-fdp.img"]=1G
00:02:20.322 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:20.322 + for nvme in "${!nvme_files[@]}"
00:02:20.322 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:02:20.322 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:20.322 + for nvme in "${!nvme_files[@]}"
00:02:20.322 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:02:20.322 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:20.322 + for nvme in "${!nvme_files[@]}"
00:02:20.322 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:02:20.322 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:20.322 + for nvme in "${!nvme_files[@]}"
00:02:20.322 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:02:20.322 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:20.322 + for nvme in "${!nvme_files[@]}"
00:02:20.322 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:02:20.322 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:20.322 + for nvme in "${!nvme_files[@]}"
00:02:20.322 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:02:20.581 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:20.581 + for nvme in "${!nvme_files[@]}"
00:02:20.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:02:20.581 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:20.581 + for nvme in "${!nvme_files[@]}"
00:02:20.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:02:20.581 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:20.581 + for nvme in "${!nvme_files[@]}"
00:02:20.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:02:20.581 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:20.581 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:02:20.581 + echo 'End stage prepare_nvme.sh'
00:02:20.581 End stage prepare_nvme.sh
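
prepare_nvme.sh drives image creation from a bash associative array: the keys are image file names, the values are sizes, and the optional images (FTL, FDP) are added only when the matching SPDK_TEST_* flag is set. The pattern, condensed (names and sizes taken from the log; echo stands in for create_nvme_img.sh):

    #!/bin/bash
    declare -A nvme_files=(
        [nvme.img]=5G
        [nvme-multi0.img]=4G
    )
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G      # conditional extras
    for nvme in "${!nvme_files[@]}"; do                          # iterate over the keys
        echo "create ex7-$nvme of size ${nvme_files[$nvme]}"
    done
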
00:02:20.592 [Pipeline] sh
00:02:20.871 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:20.872 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:20.872
00:02:20.872 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:20.872 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:20.872 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:20.872 HELP=0
00:02:20.872 DRY_RUN=0
00:02:20.872 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:02:20.872 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:20.872 NVME_AUTO_CREATE=0
00:02:20.872 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:02:20.872 NVME_CMB=,,,,
00:02:20.872 NVME_PMR=,,,,
00:02:20.872 NVME_ZNS=,,,,
00:02:20.872 NVME_MS=true,,,,
00:02:20.872 NVME_FDP=,,,on,
00:02:20.872 SPDK_VAGRANT_DISTRO=fedora39
00:02:20.872 SPDK_VAGRANT_VMCPU=10
00:02:20.872 SPDK_VAGRANT_VMRAM=12288
00:02:20.872 SPDK_VAGRANT_PROVIDER=libvirt
00:02:20.872 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:20.872 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:20.872 SPDK_OPENSTACK_NETWORK=0
00:02:20.872 VAGRANT_PACKAGE_BOX=0
00:02:20.872 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:20.872 FORCE_DISTRO=true
00:02:20.872 VAGRANT_BOX_VERSION=
00:02:20.872 EXTRA_VAGRANTFILES=
00:02:20.872 NIC_MODEL=e1000
00:02:20.872
00:02:20.872 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:20.872 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:23.401 Bringing machine 'default' up with 'libvirt' provider...
00:02:24.336 ==> default: Creating image (snapshot of base box volume).
00:02:24.595 ==> default: Creating domain with the following settings...
00:02:24.595 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733839668_94e957bdb66574b5fec3
00:02:24.595 ==> default: -- Domain type: kvm
00:02:24.595 ==> default: -- Cpus: 10
00:02:24.595 ==> default: -- Feature: acpi
00:02:24.595 ==> default: -- Feature: apic
00:02:24.595 ==> default: -- Feature: pae
00:02:24.595 ==> default: -- Memory: 12288M
00:02:24.595 ==> default: -- Memory Backing: hugepages:
00:02:24.595 ==> default: -- Management MAC:
00:02:24.595 ==> default: -- Loader:
00:02:24.595 ==> default: -- Nvram:
00:02:24.595 ==> default: -- Base box: spdk/fedora39
00:02:24.595 ==> default: -- Storage pool: default
00:02:24.595 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733839668_94e957bdb66574b5fec3.img (20G)
00:02:24.595 ==> default: -- Volume Cache: default
00:02:24.595 ==> default: -- Kernel:
00:02:24.595 ==> default: -- Initrd:
00:02:24.595 ==> default: -- Graphics Type: vnc
00:02:24.595 ==> default: -- Graphics Port: -1
00:02:24.595 ==> default: -- Graphics IP: 127.0.0.1
00:02:24.595 ==> default: -- Graphics Password: Not defined
00:02:24.595 ==> default: -- Video Type: cirrus
00:02:24.595 ==> default: -- Video VRAM: 9216
00:02:24.595 ==> default: -- Sound Type:
00:02:24.595 ==> default: -- Keymap: en-us
00:02:24.595 ==> default: -- TPM Path:
00:02:24.595 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:24.595 ==> default: -- Command line args:
00:02:24.595 ==> default: -> value=-device,
00:02:24.595 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:24.595 ==> default: -> value=-drive,
00:02:24.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:24.595 ==> default: -> value=-device,
00:02:24.595 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:24.595 ==> default: -> value=-device,
00:02:24.595 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:24.595 ==> default: -> value=-drive,
00:02:24.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:02:24.595 ==> default: -> value=-device,
00:02:24.595 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:24.595 ==> default: -> value=-device,
00:02:24.595 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:24.595 ==> default: -> value=-drive,
00:02:24.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:24.595 ==> default: -> value=-device,
00:02:24.595 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:24.595 ==> default: -> value=-drive,
00:02:24.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:24.595 ==> default: -> value=-device,
00:02:24.595 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:24.595 ==> default: -> value=-drive,
00:02:24.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:24.595 ==> default: -> value=-device,
00:02:24.595 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:24.595 ==> default: -> value=-device,
00:02:24.595 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:24.595 ==> default: -> value=-device,
00:02:24.596 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:24.596 ==> default: -> value=-drive,
00:02:24.596 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:24.596 ==> default: -> value=-device,
00:02:24.596 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
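
The vagrant-libvirt args above map one-to-one onto qemu-system-x86_64 options: each -drive defines a raw backing file, each nvme-ns device attaches it as a namespace on a controller's bus, and the nvme-subsys device groups controller nvme-3 under an NVMe subsystem with Flexible Data Placement enabled. Reassembled as a direct command line for the three-namespace controller (a sketch: trailing commas are libvirt argument separators and are dropped, and the rest of the machine options are omitted):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        ... \
        -device nvme,id=nvme-2,serial=12342,addr=0x12 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0 \
        -device nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,logical_block_size=4096,physical_block_size=4096 \
        ...   # nsid=2 and nsid=3 follow the same -drive / nvme-ns pairing
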
00:02:24.854 ==> default: Creating shared folders metadata...
00:02:24.854 ==> default: Starting domain.
00:02:26.753 ==> default: Waiting for domain to get an IP address...
00:02:44.832 ==> default: Waiting for SSH to become available...
00:02:44.832 ==> default: Configuring and enabling network interfaces...
00:02:49.055 default: SSH address: 192.168.121.74:22
00:02:49.055 default: SSH username: vagrant
00:02:49.055 default: SSH auth method: private key
00:02:51.590 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:01.597 ==> default: Mounting SSHFS shared folder...
00:03:02.536 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:02.536 ==> default: Checking Mount..
00:03:04.442 ==> default: Folder Successfully Mounted!
00:03:04.442 ==> default: Running provisioner: file...
00:03:05.380 default: ~/.gitconfig => .gitconfig
00:03:05.949
00:03:05.949 SUCCESS!
00:03:05.949
00:03:05.949 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:03:05.949 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:05.949 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:03:05.949
00:03:05.958 [Pipeline] }
00:03:05.973 [Pipeline] // stage
00:03:05.982 [Pipeline] dir
00:03:05.982 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:03:05.984 [Pipeline] {
00:03:05.996 [Pipeline] catchError
00:03:05.998 [Pipeline] {
00:03:06.011 [Pipeline] sh
00:03:06.293 + vagrant ssh-config --host vagrant
00:03:06.293 + sed -ne /^Host/,$p
00:03:06.293 + tee ssh_conf
00:03:08.830 Host vagrant
00:03:08.830 HostName 192.168.121.74
00:03:08.830 User vagrant
00:03:08.830 Port 22
00:03:08.830 UserKnownHostsFile /dev/null
00:03:08.830 StrictHostKeyChecking no
00:03:08.830 PasswordAuthentication no
00:03:08.830 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:08.830 IdentitiesOnly yes
00:03:08.830 LogLevel FATAL
00:03:08.830 ForwardAgent yes
00:03:08.830 ForwardX11 yes
00:03:08.830
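
With StrictHostKeyChecking disabled and the box's private key configured, the ssh_conf written above lets every later step address the VM simply as "vagrant". Usage sketch (the same -F pattern the pipeline uses below; the copied file name is illustrative):

    ssh -F ssh_conf vagrant@vagrant 'uname -a'           # run one command in the VM
    scp -F ssh_conf ./some-script.sh vagrant@vagrant:./  # stage a file into the VM
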
00:03:08.843 [Pipeline] withEnv
00:03:08.845 [Pipeline] {
00:03:08.858 [Pipeline] sh
00:03:09.142 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:09.142 source /etc/os-release
00:03:09.142 [[ -e /image.version ]] && img=$(< /image.version)
00:03:09.142 # Minimal, systemd-like check.
00:03:09.142 if [[ -e /.dockerenv ]]; then
00:03:09.142 # Clear garbage from the node's name:
00:03:09.142 # agt-er_autotest_547-896 -> autotest_547-896
00:03:09.142 # $HOSTNAME is the actual container id
00:03:09.142 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:09.142 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:09.142 # We can assume this is a mount from a host where container is running,
00:03:09.142 # so fetch its hostname to easily identify the target swarm worker.
00:03:09.142 container="$(< /etc/hostname) ($agent)"
00:03:09.142 else
00:03:09.142 # Fallback
00:03:09.142 container=$agent
00:03:09.142 fi
00:03:09.142 fi
00:03:09.142 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:09.142
00:03:09.413 [Pipeline] }
00:03:09.431 [Pipeline] // withEnv
00:03:09.439 [Pipeline] setCustomBuildProperty
00:03:09.455 [Pipeline] stage
00:03:09.457 [Pipeline] { (Tests)
00:03:09.473 [Pipeline] sh
00:03:09.755 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:10.028 [Pipeline] sh
00:03:10.355 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:10.632 [Pipeline] timeout
00:03:10.632 Timeout set to expire in 50 min
00:03:10.634 [Pipeline] {
00:03:10.646 [Pipeline] sh
00:03:10.926 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:11.494 HEAD is now at 4cd130da1 test/check_so_deps: use VERSION to look for prior tags
00:03:11.506 [Pipeline] sh
00:03:11.821 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:12.094 [Pipeline] sh
00:03:12.375 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:12.650 [Pipeline] sh
00:03:12.932 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:03:13.190 ++ readlink -f spdk_repo
00:03:13.190 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:13.190 + [[ -n /home/vagrant/spdk_repo ]]
00:03:13.190 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:13.190 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:13.190 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:13.190 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:13.190 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:13.190 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:13.190 + cd /home/vagrant/spdk_repo
00:03:13.191 + source /etc/os-release
00:03:13.191 ++ NAME='Fedora Linux'
00:03:13.191 ++ VERSION='39 (Cloud Edition)'
00:03:13.191 ++ ID=fedora
00:03:13.191 ++ VERSION_ID=39
00:03:13.191 ++ VERSION_CODENAME=
00:03:13.191 ++ PLATFORM_ID=platform:f39
00:03:13.191 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:13.191 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:13.191 ++ LOGO=fedora-logo-icon
00:03:13.191 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:13.191 ++ HOME_URL=https://fedoraproject.org/
00:03:13.191 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:13.191 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:13.191 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:13.191 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:13.191 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:13.191 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:13.191 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:13.191 ++ SUPPORT_END=2024-11-12
00:03:13.191 ++ VARIANT='Cloud Edition'
00:03:13.191 ++ VARIANT_ID=cloud
00:03:13.191 + uname -a
00:03:13.191 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:13.191 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:13.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:14.018 Hugepages
00:03:14.018 node hugesize free / total
00:03:14.018 node0 1048576kB 0 / 0
00:03:14.018 node0 2048kB 0 / 0
00:03:14.018
00:03:14.018 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:14.018 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:14.018 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:14.018 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:14.018 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:03:14.018 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:03:14.018 + rm -f /tmp/spdk-ld-path
00:03:14.018 + source autorun-spdk.conf
00:03:14.018 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:14.018 ++ SPDK_TEST_NVME=1
00:03:14.018 ++ SPDK_TEST_FTL=1
00:03:14.018 ++ SPDK_TEST_ISAL=1
00:03:14.018 ++ SPDK_RUN_ASAN=1
00:03:14.018 ++ SPDK_RUN_UBSAN=1
00:03:14.018 ++ SPDK_TEST_XNVME=1
00:03:14.018 ++ SPDK_TEST_NVME_FDP=1
00:03:14.018 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:14.018 ++ RUN_NIGHTLY=0
00:03:14.018 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:14.018 + [[ -n '' ]]
00:03:14.018 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:14.277 + for M in /var/spdk/build-*-manifest.txt
00:03:14.277 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:14.277 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:14.277 + for M in /var/spdk/build-*-manifest.txt
00:03:14.277 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:14.277 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:14.277 + for M in /var/spdk/build-*-manifest.txt
00:03:14.277 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:14.277 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:14.277 ++ uname
00:03:14.277 + [[ Linux == \L\i\n\u\x ]]
00:03:14.277 + sudo dmesg -T
00:03:14.277 + sudo dmesg --clear
00:03:14.277 + dmesg_pid=5241
+ sudo dmesg -Tw
00:03:14.277 + [[ Fedora Linux == FreeBSD ]]
00:03:14.277 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:14.277 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:14.277 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:14.277 + [[ -x /usr/src/fio-static/fio ]]
00:03:14.277 + export FIO_BIN=/usr/src/fio-static/fio
00:03:14.277 + FIO_BIN=/usr/src/fio-static/fio
00:03:14.277 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:14.277 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:14.277 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:14.277 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:14.277 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:14.277 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:14.277 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:14.277 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:14.277 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:14.547 14:08:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:14.547 14:08:39 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:14.547 14:08:39 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:03:14.547 14:08:39 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:14.547 14:08:39 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
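
Before handing off to autobuild.sh, autorun.sh installs an EXIT trap (autorun.sh@22 above), so the timing summary runs, or the build fails loudly, no matter where the script exits. The underlying bash pattern, sketched with a hypothetical cleanup function:

    #!/bin/bash
    finish() { echo "print timing summary / clean up here"; }  # illustrative stand-in
    trap finish EXIT        # fires on every exit path, success or failure
    # ... long-running build and test steps ...
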
00:03:14.547 14:08:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:14.547 14:08:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:14.547 14:08:39 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:14.547 14:08:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:14.547 14:08:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:14.547 14:08:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:14.547 14:08:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.547 14:08:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.547 14:08:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:14.548 14:08:39 -- paths/export.sh@5 -- $ export PATH
00:03:14.548 14:08:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
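
Each sourced helper above prepends its tool directories to PATH unconditionally, so /opt/go, /opt/golangci and /opt/protoc appear several times in the final echo. Harmless here, but the usual guard against such duplicates is an idempotent prepend (illustrative helper, not from the log):

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already present: do nothing
            *) PATH=$1:$PATH ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
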
00:03:14.548 14:08:39 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:14.548 14:08:39 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:14.548 14:08:39 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733839719.XXXXXX
00:03:14.548 14:08:39 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733839719.SE2MEO
00:03:14.548 14:08:39 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:14.548 14:08:39 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:14.548 14:08:39 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:14.548 14:08:39 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:14.548 14:08:39 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:14.548 14:08:39 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:14.548 14:08:39 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:14.548 14:08:39 -- common/autotest_common.sh@10 -- $ set +x
00:03:14.548 14:08:39 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:03:14.548 14:08:39 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:14.548 14:08:39 -- pm/common@17 -- $ local monitor
00:03:14.548 14:08:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.548 14:08:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:14.548 14:08:39 -- pm/common@25 -- $ sleep 1
00:03:14.548 14:08:39 -- pm/common@21 -- $ date +%s
00:03:14.548 14:08:39 -- pm/common@21 -- $ date +%s
00:03:14.548 14:08:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733839719
00:03:14.548 14:08:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733839719
00:03:14.548 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733839719_collect-cpu-load.pm.log
00:03:14.548 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733839719_collect-vmstat.pm.log
00:03:15.487 14:08:40 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:15.487 14:08:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:15.487 14:08:40 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:15.487 14:08:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:15.487 14:08:40 -- spdk/autobuild.sh@16 -- $ date -u
00:03:15.487 Tue Dec 10 02:08:40 PM UTC 2024
00:03:15.487 14:08:40 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:15.487 v25.01-pre-305-g4cd130da1
00:03:15.487 14:08:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:15.487 14:08:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:15.487 14:08:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:15.487 14:08:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:15.487 14:08:40 -- common/autotest_common.sh@10 -- $ set +x
00:03:15.488 ************************************
00:03:15.488 START TEST asan
00:03:15.488 ************************************
00:03:15.488 using asan
00:03:15.488 14:08:40 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:15.488
00:03:15.488 real 0m0.000s
00:03:15.488 user 0m0.000s
00:03:15.488 sys 0m0.000s
00:03:15.488 ************************************
00:03:15.488 14:08:40 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:15.488 14:08:40 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:15.488 END TEST asan
00:03:15.488 ************************************
00:03:15.747 14:08:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:15.747 14:08:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:15.747 14:08:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:15.747 14:08:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:15.747 14:08:40 -- common/autotest_common.sh@10 -- $ set +x
00:03:15.747 ************************************
00:03:15.747 START TEST ubsan
00:03:15.747 ************************************
00:03:15.747 using ubsan
00:03:15.747 14:08:40 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:15.747
00:03:15.747 real 0m0.000s
00:03:15.747 user 0m0.000s
00:03:15.747 sys 0m0.000s
00:03:15.747 14:08:40 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:15.747 ************************************
00:03:15.747 END TEST ubsan
00:03:15.747 14:08:40 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:15.747 ************************************
00:03:15.747 14:08:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:15.747 14:08:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:15.747 14:08:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:15.747 14:08:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:15.747 14:08:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:15.747 14:08:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:15.747 14:08:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
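
The asan/ubsan blocks above show run_test's contract: a START/END banner pair around the command plus a time summary. A hypothetical minimal wrapper in the same spirit (not SPDK's actual implementation, which also records timings for the final report):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"               # run the suite and print real/user/sys
        echo "END TEST $name"
    }
    run_test asan echo 'using asan'
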
00:03:15.747 14:08:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:15.747 14:08:40 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:16.006 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:16.006 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:16.576 Using 'verbs' RDMA provider
00:03:32.407 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:50.558 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:50.558 Creating mk/config.mk...done.
00:03:50.558 Creating mk/cc.flags.mk...done.
00:03:50.558 Type 'make' to build.
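
The configure invocation above uses the flag string assembled by get_config_params earlier in the log. To reproduce this stage by hand inside the VM, the sequence would look roughly like this (abbreviated flag list; -j10 matches SPDK_VAGRANT_VMCPU):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --enable-asan --enable-ubsan \
        --with-xnvme --with-shared
    make -j10
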
00:03:50.558 14:09:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:50.558 14:09:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:50.558 14:09:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:50.558 14:09:13 -- common/autotest_common.sh@10 -- $ set +x
00:03:50.558 ************************************
00:03:50.558 START TEST make
00:03:50.558 ************************************
00:03:50.558 14:09:13 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:50.558 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:50.558 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:50.558 meson setup builddir \
00:03:50.558 -Dwith-libaio=enabled \
00:03:50.558 -Dwith-liburing=enabled \
00:03:50.558 -Dwith-libvfn=disabled \
00:03:50.558 -Dwith-spdk=disabled \
00:03:50.558 -Dexamples=false \
00:03:50.558 -Dtests=false \
00:03:50.558 -Dtools=false && \
00:03:50.558 meson compile -C builddir && \
00:03:50.558 cd -)
00:03:51.495 The Meson build system
00:03:51.495 Version: 1.5.0
00:03:51.495 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:51.495 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:51.495 Build type: native build
00:03:51.495 Project name: xnvme
00:03:51.495 Project version: 0.7.5
00:03:51.495 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:51.495 C linker for the host machine: cc ld.bfd 2.40-14
00:03:51.495 Host machine cpu family: x86_64
00:03:51.495 Host machine cpu: x86_64
00:03:51.495 Message: host_machine.system: linux
00:03:51.495 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:51.495 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:51.495 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:51.495 Run-time dependency threads found: YES
00:03:51.495 Has header "setupapi.h" : NO
00:03:51.495 Has header "linux/blkzoned.h" : YES
00:03:51.495 Has header "linux/blkzoned.h" : YES (cached)
00:03:51.495 Has header "libaio.h" : YES
00:03:51.495 Library aio found: YES
00:03:51.495 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:51.495 Run-time dependency liburing found: YES 2.2
00:03:51.495 Dependency libvfn skipped: feature with-libvfn disabled
00:03:51.495 Found CMake: /usr/bin/cmake (3.27.7)
00:03:51.495 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:51.495 Subproject spdk : skipped: feature with-spdk disabled
00:03:51.495 Run-time dependency appleframeworks found: NO (tried framework)
00:03:51.495 Run-time dependency appleframeworks found: NO (tried framework)
00:03:51.495 Library rt found: YES
00:03:51.495 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:51.495 Configuring xnvme_config.h using configuration
00:03:51.495 Configuring xnvme.spec using configuration
00:03:51.495 Run-time dependency bash-completion found: YES 2.11
00:03:51.495 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:51.495 Program cp found: YES (/usr/bin/cp)
00:03:51.495 Build targets in project: 3
00:03:51.495
00:03:51.495 xnvme 0.7.5
00:03:51.495
00:03:51.495 Subprojects
00:03:51.495 spdk : NO Feature 'with-spdk' disabled
00:03:51.495
00:03:51.495 User defined options
00:03:51.495 examples : false
00:03:51.495 tests : false
00:03:51.495 tools : false
00:03:51.495 with-libaio : enabled
00:03:51.495 with-liburing: enabled
00:03:51.495 with-libvfn : disabled
00:03:51.495 with-spdk : disabled
00:03:51.495
00:03:51.495 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:51.754 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:51.754 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:52.014 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:52.014 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:52.014 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:52.014 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:52.014 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:52.014 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:52.014 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:52.014 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:52.014 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:52.014 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:52.014 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:52.014 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:52.014 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:52.014 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:52.014 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:52.014 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:52.014 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:52.014 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:52.014 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:52.273 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:52.273 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:52.273 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:52.273 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:52.273 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:52.273 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:52.273 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:52.273 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:52.273 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:52.273 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:52.273 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:52.273 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:52.273 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:52.273 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:52.273 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:52.273 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:52.273 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:52.273 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:52.273 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:52.273 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:52.273 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:52.273 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:52.273 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:52.273 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:52.273 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:52.273 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:52.273 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:52.273 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:52.273 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:52.273 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:52.273 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:52.273 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:52.273 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:52.273 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:52.273 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:52.273 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:52.273 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:52.533 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:52.533 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:52.533 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:52.533 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:52.533 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:52.533 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:52.533 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:52.533 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:52.533 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:52.533 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:52.533 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:52.533 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:52.533 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:52.533 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:03:52.533 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:52.792 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:52.792 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:03:53.051 [75/76] Linking static target lib/libxnvme.a
00:03:53.051 [76/76] Linking target lib/libxnvme.so.0.7.5
00:03:53.051 INFO: autodetecting backend as ninja
00:03:53.051 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:53.051 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:59.658 The Meson build system
00:03:59.658 Version: 1.5.0
00:03:59.658 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:59.658 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:59.658 Build type: native build
00:03:59.658 Program cat found: YES (/usr/bin/cat)
00:03:59.658 Project name: DPDK
00:03:59.658 Project version: 24.03.0
00:03:59.658 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:59.658 C linker for the host machine: cc ld.bfd 2.40-14
00:03:59.658 Host machine cpu family: x86_64
00:03:59.658 Host machine cpu: x86_64
00:03:59.658 Message: ## Building in Developer Mode ##
00:03:59.658 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:59.658 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:59.658 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:59.658 Program python3 found: YES (/usr/bin/python3)
00:03:59.658 Program cat found: YES (/usr/bin/cat)
00:03:59.658 Compiler for C supports arguments -march=native: YES
00:03:59.658 Checking for size of "void *" : 8
00:03:59.658 Checking for size of "void *" : 8 (cached)
00:03:59.658 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:59.658 Library m found: YES
00:03:59.658 Library numa found: YES
00:03:59.658 Has header "numaif.h" : YES
00:03:59.658 Library fdt found: NO
00:03:59.658 Library execinfo found: NO
00:03:59.658 Has header "execinfo.h" : YES
00:03:59.658 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:59.658 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:59.658 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:59.658 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:59.658 Run-time dependency openssl found: YES 3.1.1
00:03:59.658 Run-time dependency libpcap found: YES 1.10.4
00:03:59.658 Has header "pcap.h" with dependency libpcap: YES
00:03:59.658 Compiler for C supports arguments -Wcast-qual: YES
00:03:59.658 Compiler for C supports arguments -Wdeprecated: YES
00:03:59.659 Compiler for C supports arguments -Wformat: YES
00:03:59.659 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:59.659 Compiler for C supports arguments -Wformat-security: NO
00:03:59.659 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:59.659 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:59.659 Compiler for C supports arguments -Wnested-externs: YES
00:03:59.659 Compiler for C supports arguments -Wold-style-definition: YES
00:03:59.659 Compiler for C supports arguments -Wpointer-arith: YES
00:03:59.659 Compiler for C supports arguments -Wsign-compare: YES
00:03:59.659 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:59.659 Compiler for C supports arguments -Wundef: YES
00:03:59.659 Compiler for C supports arguments -Wwrite-strings: YES
00:03:59.659 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:59.659 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:59.659 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:59.659 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:59.659 Program objdump found: YES (/usr/bin/objdump)
00:03:59.659 Compiler for C supports arguments -mavx512f: YES
00:03:59.659 Checking if "AVX512 checking" compiles: YES
00:03:59.659 Fetching value of define "__SSE4_2__" : 1
00:03:59.659 Fetching value of define "__AES__" : 1
00:03:59.659 Fetching value of define "__AVX__" : 1
00:03:59.659 Fetching value of define "__AVX2__" : 1
00:03:59.659 Fetching value of define "__AVX512BW__" : 1
00:03:59.659 Fetching value of define "__AVX512CD__" : 1
00:03:59.659 Fetching value of define "__AVX512DQ__" : 1
00:03:59.659 Fetching value of define "__AVX512F__" : 1
00:03:59.659 Fetching value of define "__AVX512VL__" : 1
00:03:59.659 Fetching value of define "__PCLMUL__" : 1
00:03:59.659 Fetching value of define "__RDRND__" : 1
00:03:59.659 Fetching value of define "__RDSEED__" : 1
00:03:59.659 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:59.659 Fetching value of define "__znver1__" : (undefined)
00:03:59.659 Fetching value of define "__znver2__" : (undefined)
00:03:59.659 Fetching value of define "__znver3__" : (undefined)
00:03:59.659 Fetching value of define "__znver4__" : (undefined)
00:03:59.659 Library asan found: YES
00:03:59.659 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:59.659 Message: lib/log: Defining dependency "log"
00:03:59.659 Message: lib/kvargs: Defining dependency "kvargs"
00:03:59.659 Message: lib/telemetry: Defining dependency "telemetry"
00:03:59.659 Library rt found: YES
00:03:59.659 Checking for function "getentropy" : NO
00:03:59.659 Message: lib/eal: Defining dependency "eal"
00:03:59.659 Message: lib/ring: Defining dependency "ring"
00:03:59.659 Message: lib/rcu: Defining dependency "rcu"
00:03:59.659 Message: lib/mempool: Defining dependency "mempool"
00:03:59.659 Message: lib/mbuf: Defining dependency "mbuf"
00:03:59.659 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:59.659 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:59.659 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:59.659 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:59.659 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:59.659 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:59.659 Compiler for C supports arguments -mpclmul: YES
00:03:59.659 Compiler for C supports arguments -maes: YES
00:03:59.659 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:59.659 Compiler for C supports arguments -mavx512bw: YES
00:03:59.659 Compiler for C supports arguments -mavx512dq: YES
00:03:59.659 Compiler for C supports arguments -mavx512vl: YES
00:03:59.659 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:59.659 Compiler for C supports arguments -mavx2: YES
00:03:59.659 Compiler for C supports arguments -mavx: YES
00:03:59.659 Message: lib/net: Defining dependency "net"
00:03:59.659 Message: lib/meter: Defining dependency "meter"
00:03:59.659 Message: lib/ethdev: Defining dependency "ethdev"
00:03:59.659 Message: lib/pci: Defining dependency "pci"
00:03:59.659 Message: lib/cmdline: Defining dependency "cmdline"
00:03:59.659 Message: lib/hash: Defining dependency "hash"
00:03:59.659 Message: lib/timer: Defining dependency "timer"
00:03:59.659 Message: lib/compressdev: Defining dependency "compressdev"
00:03:59.659 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:59.659 Message: lib/dmadev: Defining dependency "dmadev"
00:03:59.659 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:59.659 Message: lib/power: Defining dependency "power"
00:03:59.659 Message: lib/reorder: Defining dependency "reorder"
00:03:59.659 Message: lib/security: Defining dependency "security"
00:03:59.659 Has header "linux/userfaultfd.h" : YES
00:03:59.659 Has header "linux/vduse.h" : YES
00:03:59.659 Message: lib/vhost: Defining dependency "vhost"
00:03:59.659 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:59.659 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:59.659 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:59.659 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:59.659 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:59.659 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:59.659 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:59.659 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:59.659 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:59.659 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:59.659 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:59.659 Configuring doxy-api-html.conf using configuration
00:03:59.659 Configuring doxy-api-man.conf using configuration
00:03:59.659 Program mandb found: YES (/usr/bin/mandb)
00:03:59.659 Program sphinx-build found: NO
00:03:59.659 Configuring rte_build_config.h using configuration
00:03:59.659 Message:
00:03:59.659 =================
00:03:59.659 Applications Enabled
00:03:59.659 =================
00:03:59.659
00:03:59.659 apps:
00:03:59.659
00:03:59.659
00:03:59.659 Message:
00:03:59.659 =================
00:03:59.659 Libraries Enabled
00:03:59.659 =================
00:03:59.659
00:03:59.659 libs:
00:03:59.659 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:59.659 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:59.659 cryptodev, dmadev, power, reorder, security, vhost,
00:03:59.659
00:03:59.659 Message:
00:03:59.659 ===============
00:03:59.659 Drivers Enabled
00:03:59.659 ===============
00:03:59.659
00:03:59.659 common:
00:03:59.659
00:03:59.659 bus:
00:03:59.659 pci, vdev,
00:03:59.659 mempool:
00:03:59.659 ring,
00:03:59.659 dma:
00:03:59.659
00:03:59.659 net:
00:03:59.659
00:03:59.659 crypto:
00:03:59.659
00:03:59.659 compress:
00:03:59.659
00:03:59.659 vdpa:
00:03:59.659
00:03:59.659
00:03:59.659 Message:
00:03:59.659 =================
00:03:59.659 Content Skipped
00:03:59.659 =================
00:03:59.659
00:03:59.659 apps:
00:03:59.659 dumpcap: explicitly disabled via build config
00:03:59.659 graph: explicitly disabled via build config
00:03:59.659 pdump: explicitly disabled via build config
00:03:59.659 proc-info: explicitly disabled via build config
00:03:59.659 test-acl: explicitly disabled via build config
00:03:59.659 test-bbdev: explicitly disabled via build config
00:03:59.659 test-cmdline: explicitly disabled via build config
00:03:59.659 test-compress-perf: explicitly disabled via build config
00:03:59.659 test-crypto-perf: explicitly disabled via build config
00:03:59.659 test-dma-perf: explicitly disabled via build config
via build config 00:03:59.659 test-eventdev: explicitly disabled via build config 00:03:59.659 test-fib: explicitly disabled via build config 00:03:59.659 test-flow-perf: explicitly disabled via build config 00:03:59.659 test-gpudev: explicitly disabled via build config 00:03:59.659 test-mldev: explicitly disabled via build config 00:03:59.659 test-pipeline: explicitly disabled via build config 00:03:59.659 test-pmd: explicitly disabled via build config 00:03:59.659 test-regex: explicitly disabled via build config 00:03:59.659 test-sad: explicitly disabled via build config 00:03:59.659 test-security-perf: explicitly disabled via build config 00:03:59.659 00:03:59.659 libs: 00:03:59.659 argparse: explicitly disabled via build config 00:03:59.659 metrics: explicitly disabled via build config 00:03:59.659 acl: explicitly disabled via build config 00:03:59.659 bbdev: explicitly disabled via build config 00:03:59.659 bitratestats: explicitly disabled via build config 00:03:59.659 bpf: explicitly disabled via build config 00:03:59.659 cfgfile: explicitly disabled via build config 00:03:59.659 distributor: explicitly disabled via build config 00:03:59.659 efd: explicitly disabled via build config 00:03:59.659 eventdev: explicitly disabled via build config 00:03:59.659 dispatcher: explicitly disabled via build config 00:03:59.659 gpudev: explicitly disabled via build config 00:03:59.659 gro: explicitly disabled via build config 00:03:59.659 gso: explicitly disabled via build config 00:03:59.659 ip_frag: explicitly disabled via build config 00:03:59.659 jobstats: explicitly disabled via build config 00:03:59.659 latencystats: explicitly disabled via build config 00:03:59.659 lpm: explicitly disabled via build config 00:03:59.659 member: explicitly disabled via build config 00:03:59.659 pcapng: explicitly disabled via build config 00:03:59.659 rawdev: explicitly disabled via build config 00:03:59.659 regexdev: explicitly disabled via build config 00:03:59.659 mldev: explicitly disabled via build config 00:03:59.659 rib: explicitly disabled via build config 00:03:59.659 sched: explicitly disabled via build config 00:03:59.659 stack: explicitly disabled via build config 00:03:59.659 ipsec: explicitly disabled via build config 00:03:59.659 pdcp: explicitly disabled via build config 00:03:59.659 fib: explicitly disabled via build config 00:03:59.659 port: explicitly disabled via build config 00:03:59.659 pdump: explicitly disabled via build config 00:03:59.659 table: explicitly disabled via build config 00:03:59.659 pipeline: explicitly disabled via build config 00:03:59.659 graph: explicitly disabled via build config 00:03:59.659 node: explicitly disabled via build config 00:03:59.659 00:03:59.659 drivers: 00:03:59.659 common/cpt: not in enabled drivers build config 00:03:59.659 common/dpaax: not in enabled drivers build config 00:03:59.659 common/iavf: not in enabled drivers build config 00:03:59.659 common/idpf: not in enabled drivers build config 00:03:59.659 common/ionic: not in enabled drivers build config 00:03:59.659 common/mvep: not in enabled drivers build config 00:03:59.659 common/octeontx: not in enabled drivers build config 00:03:59.659 bus/auxiliary: not in enabled drivers build config 00:03:59.659 bus/cdx: not in enabled drivers build config 00:03:59.660 bus/dpaa: not in enabled drivers build config 00:03:59.660 bus/fslmc: not in enabled drivers build config 00:03:59.660 bus/ifpga: not in enabled drivers build config 00:03:59.660 bus/platform: not in enabled drivers build config 
00:03:59.660 bus/uacce: not in enabled drivers build config 00:03:59.660 bus/vmbus: not in enabled drivers build config 00:03:59.660 common/cnxk: not in enabled drivers build config 00:03:59.660 common/mlx5: not in enabled drivers build config 00:03:59.660 common/nfp: not in enabled drivers build config 00:03:59.660 common/nitrox: not in enabled drivers build config 00:03:59.660 common/qat: not in enabled drivers build config 00:03:59.660 common/sfc_efx: not in enabled drivers build config 00:03:59.660 mempool/bucket: not in enabled drivers build config 00:03:59.660 mempool/cnxk: not in enabled drivers build config 00:03:59.660 mempool/dpaa: not in enabled drivers build config 00:03:59.660 mempool/dpaa2: not in enabled drivers build config 00:03:59.660 mempool/octeontx: not in enabled drivers build config 00:03:59.660 mempool/stack: not in enabled drivers build config 00:03:59.660 dma/cnxk: not in enabled drivers build config 00:03:59.660 dma/dpaa: not in enabled drivers build config 00:03:59.660 dma/dpaa2: not in enabled drivers build config 00:03:59.660 dma/hisilicon: not in enabled drivers build config 00:03:59.660 dma/idxd: not in enabled drivers build config 00:03:59.660 dma/ioat: not in enabled drivers build config 00:03:59.660 dma/skeleton: not in enabled drivers build config 00:03:59.660 net/af_packet: not in enabled drivers build config 00:03:59.660 net/af_xdp: not in enabled drivers build config 00:03:59.660 net/ark: not in enabled drivers build config 00:03:59.660 net/atlantic: not in enabled drivers build config 00:03:59.660 net/avp: not in enabled drivers build config 00:03:59.660 net/axgbe: not in enabled drivers build config 00:03:59.660 net/bnx2x: not in enabled drivers build config 00:03:59.660 net/bnxt: not in enabled drivers build config 00:03:59.660 net/bonding: not in enabled drivers build config 00:03:59.660 net/cnxk: not in enabled drivers build config 00:03:59.660 net/cpfl: not in enabled drivers build config 00:03:59.660 net/cxgbe: not in enabled drivers build config 00:03:59.660 net/dpaa: not in enabled drivers build config 00:03:59.660 net/dpaa2: not in enabled drivers build config 00:03:59.660 net/e1000: not in enabled drivers build config 00:03:59.660 net/ena: not in enabled drivers build config 00:03:59.660 net/enetc: not in enabled drivers build config 00:03:59.660 net/enetfec: not in enabled drivers build config 00:03:59.660 net/enic: not in enabled drivers build config 00:03:59.660 net/failsafe: not in enabled drivers build config 00:03:59.660 net/fm10k: not in enabled drivers build config 00:03:59.660 net/gve: not in enabled drivers build config 00:03:59.660 net/hinic: not in enabled drivers build config 00:03:59.660 net/hns3: not in enabled drivers build config 00:03:59.660 net/i40e: not in enabled drivers build config 00:03:59.660 net/iavf: not in enabled drivers build config 00:03:59.660 net/ice: not in enabled drivers build config 00:03:59.660 net/idpf: not in enabled drivers build config 00:03:59.660 net/igc: not in enabled drivers build config 00:03:59.660 net/ionic: not in enabled drivers build config 00:03:59.660 net/ipn3ke: not in enabled drivers build config 00:03:59.660 net/ixgbe: not in enabled drivers build config 00:03:59.660 net/mana: not in enabled drivers build config 00:03:59.660 net/memif: not in enabled drivers build config 00:03:59.660 net/mlx4: not in enabled drivers build config 00:03:59.660 net/mlx5: not in enabled drivers build config 00:03:59.660 net/mvneta: not in enabled drivers build config 00:03:59.660 net/mvpp2: not in 
enabled drivers build config 00:03:59.660 net/netvsc: not in enabled drivers build config 00:03:59.660 net/nfb: not in enabled drivers build config 00:03:59.660 net/nfp: not in enabled drivers build config 00:03:59.660 net/ngbe: not in enabled drivers build config 00:03:59.660 net/null: not in enabled drivers build config 00:03:59.660 net/octeontx: not in enabled drivers build config 00:03:59.660 net/octeon_ep: not in enabled drivers build config 00:03:59.660 net/pcap: not in enabled drivers build config 00:03:59.660 net/pfe: not in enabled drivers build config 00:03:59.660 net/qede: not in enabled drivers build config 00:03:59.660 net/ring: not in enabled drivers build config 00:03:59.660 net/sfc: not in enabled drivers build config 00:03:59.660 net/softnic: not in enabled drivers build config 00:03:59.660 net/tap: not in enabled drivers build config 00:03:59.660 net/thunderx: not in enabled drivers build config 00:03:59.660 net/txgbe: not in enabled drivers build config 00:03:59.660 net/vdev_netvsc: not in enabled drivers build config 00:03:59.660 net/vhost: not in enabled drivers build config 00:03:59.660 net/virtio: not in enabled drivers build config 00:03:59.660 net/vmxnet3: not in enabled drivers build config 00:03:59.660 raw/*: missing internal dependency, "rawdev" 00:03:59.660 crypto/armv8: not in enabled drivers build config 00:03:59.660 crypto/bcmfs: not in enabled drivers build config 00:03:59.660 crypto/caam_jr: not in enabled drivers build config 00:03:59.660 crypto/ccp: not in enabled drivers build config 00:03:59.660 crypto/cnxk: not in enabled drivers build config 00:03:59.660 crypto/dpaa_sec: not in enabled drivers build config 00:03:59.660 crypto/dpaa2_sec: not in enabled drivers build config 00:03:59.660 crypto/ipsec_mb: not in enabled drivers build config 00:03:59.660 crypto/mlx5: not in enabled drivers build config 00:03:59.660 crypto/mvsam: not in enabled drivers build config 00:03:59.660 crypto/nitrox: not in enabled drivers build config 00:03:59.660 crypto/null: not in enabled drivers build config 00:03:59.660 crypto/octeontx: not in enabled drivers build config 00:03:59.660 crypto/openssl: not in enabled drivers build config 00:03:59.660 crypto/scheduler: not in enabled drivers build config 00:03:59.660 crypto/uadk: not in enabled drivers build config 00:03:59.660 crypto/virtio: not in enabled drivers build config 00:03:59.660 compress/isal: not in enabled drivers build config 00:03:59.660 compress/mlx5: not in enabled drivers build config 00:03:59.660 compress/nitrox: not in enabled drivers build config 00:03:59.660 compress/octeontx: not in enabled drivers build config 00:03:59.660 compress/zlib: not in enabled drivers build config 00:03:59.660 regex/*: missing internal dependency, "regexdev" 00:03:59.660 ml/*: missing internal dependency, "mldev" 00:03:59.660 vdpa/ifc: not in enabled drivers build config 00:03:59.660 vdpa/mlx5: not in enabled drivers build config 00:03:59.660 vdpa/nfp: not in enabled drivers build config 00:03:59.660 vdpa/sfc: not in enabled drivers build config 00:03:59.660 event/*: missing internal dependency, "eventdev" 00:03:59.660 baseband/*: missing internal dependency, "bbdev" 00:03:59.660 gpu/*: missing internal dependency, "gpudev" 00:03:59.660 00:03:59.660 00:03:59.931 Build targets in project: 85 00:03:59.931 00:03:59.931 DPDK 24.03.0 00:03:59.931 00:03:59.931 User defined options 00:03:59.931 buildtype : debug 00:03:59.931 default_library : shared 00:03:59.931 libdir : lib 00:03:59.931 prefix : 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:59.931 b_sanitize : address 00:03:59.931 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:59.931 c_link_args : 00:03:59.931 cpu_instruction_set: native 00:03:59.931 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:59.931 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:59.931 enable_docs : false 00:03:59.931 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:59.931 enable_kmods : false 00:03:59.931 max_lcores : 128 00:03:59.931 tests : false 00:03:59.931 00:03:59.931 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:00.498 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:00.498 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:00.498 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:00.498 [3/268] Linking static target lib/librte_kvargs.a 00:04:00.498 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:00.756 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:00.756 [6/268] Linking static target lib/librte_log.a 00:04:01.015 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:01.015 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:01.015 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:01.015 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:01.015 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.015 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:01.015 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:01.015 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:01.015 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:01.274 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:01.274 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:01.274 [18/268] Linking static target lib/librte_telemetry.a 00:04:01.533 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:01.533 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:01.533 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.533 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:01.533 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:01.792 [24/268] Linking target lib/librte_log.so.24.1 00:04:01.792 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:01.792 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:01.792 [27/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:01.792 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:01.792 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:01.792 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:01.792 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:01.792 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.052 [33/268] Linking target lib/librte_kvargs.so.24.1 00:04:02.052 [34/268] Linking target lib/librte_telemetry.so.24.1 00:04:02.052 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:02.052 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:02.311 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:02.311 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:02.311 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:02.311 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:02.311 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:02.311 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:02.311 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:02.311 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:02.311 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:02.311 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:02.570 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:02.570 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:02.829 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:02.829 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:02.829 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:02.829 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:02.829 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:02.829 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:02.829 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:02.829 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:03.088 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:03.347 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:03.347 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:03.347 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:03.347 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:03.347 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:03.347 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:03.347 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:03.347 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:03.347 [66/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:03.347 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:03.606 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:03.865 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:03.865 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:03.865 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:03.865 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:03.865 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:03.865 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:03.865 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:03.865 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:03.865 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:04.125 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:04.125 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:04.125 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:04.125 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:04.125 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:04.383 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:04.383 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:04.383 [85/268] Linking static target lib/librte_eal.a 00:04:04.383 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:04.383 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:04.642 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:04.642 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:04.642 [90/268] Linking static target lib/librte_rcu.a 00:04:04.642 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:04.642 [92/268] Linking static target lib/librte_ring.a 00:04:04.642 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:04.642 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:04.642 [95/268] Linking static target lib/librte_mempool.a 00:04:04.901 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:04.901 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:04.901 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:04.901 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:05.161 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:05.161 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.161 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:05.161 [103/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.161 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:05.161 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:05.161 [106/268] Linking static target lib/librte_mbuf.a 00:04:05.420 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:05.420 [108/268] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:05.420 [109/268] Linking static target lib/librte_net.a 00:04:05.420 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:05.420 [111/268] Linking static target lib/librte_meter.a 00:04:05.679 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:05.679 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:05.679 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:05.938 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.938 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:05.938 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.938 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.198 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:06.198 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:06.198 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:06.198 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.198 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:06.457 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:06.457 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:06.457 [126/268] Linking static target lib/librte_pci.a 00:04:06.716 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:06.716 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:06.716 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:06.716 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:06.716 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:06.975 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:06.975 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:06.975 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.975 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:06.975 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:06.975 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:06.975 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:06.975 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:06.975 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:07.234 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:07.234 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:07.234 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:07.234 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:07.234 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:07.234 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:07.234 [147/268] Linking static target lib/librte_cmdline.a 
00:04:07.494 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:07.494 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:07.494 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:07.753 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:07.753 [152/268] Linking static target lib/librte_timer.a 00:04:07.753 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:08.013 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:08.013 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:08.013 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:08.013 [157/268] Linking static target lib/librte_ethdev.a 00:04:08.013 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:08.013 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:08.013 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:08.013 [161/268] Linking static target lib/librte_hash.a 00:04:08.272 [162/268] Linking static target lib/librte_compressdev.a 00:04:08.272 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:08.272 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.272 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:08.272 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:08.532 [167/268] Linking static target lib/librte_dmadev.a 00:04:08.532 [168/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:08.532 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:08.791 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:08.791 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:08.791 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:09.051 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:09.051 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.051 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.310 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:09.310 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:09.310 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:09.310 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.310 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.310 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:09.310 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:09.578 [183/268] Linking static target lib/librte_cryptodev.a 00:04:09.578 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:09.578 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:09.579 [186/268] Linking static target lib/librte_power.a 00:04:09.849 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 
00:04:09.849 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:09.849 [189/268] Linking static target lib/librte_reorder.a 00:04:09.849 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:09.849 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:10.109 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:10.109 [193/268] Linking static target lib/librte_security.a 00:04:10.368 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:10.627 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.885 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:10.885 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.885 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:10.885 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.144 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:11.144 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:11.402 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:11.402 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:11.402 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:11.402 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:11.660 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:11.660 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:11.660 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:11.660 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:11.919 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:11.919 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:11.919 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:11.919 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:11.919 [214/268] Linking static target drivers/librte_bus_pci.a 00:04:11.919 [215/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.177 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:12.177 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:12.177 [218/268] Linking static target drivers/librte_bus_vdev.a 00:04:12.177 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:12.177 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:12.177 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:12.436 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:12.436 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:12.436 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:12.436 [225/268] Linking static target drivers/librte_mempool_ring.a 00:04:12.436 [226/268] 
Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.696 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.635 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:16.927 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:16.927 [230/268] Linking static target lib/librte_vhost.a 00:04:17.186 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.186 [232/268] Linking target lib/librte_eal.so.24.1 00:04:17.446 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:17.446 [234/268] Linking target lib/librte_ring.so.24.1 00:04:17.446 [235/268] Linking target lib/librte_dmadev.so.24.1 00:04:17.446 [236/268] Linking target lib/librte_pci.so.24.1 00:04:17.446 [237/268] Linking target lib/librte_meter.so.24.1 00:04:17.446 [238/268] Linking target lib/librte_timer.so.24.1 00:04:17.446 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:17.706 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:17.706 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:17.706 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:17.706 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:17.706 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:17.706 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:17.706 [246/268] Linking target lib/librte_rcu.so.24.1 00:04:17.706 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:17.706 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.706 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:17.706 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:17.966 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:17.966 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:17.966 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:17.966 [254/268] Linking target lib/librte_reorder.so.24.1 00:04:17.966 [255/268] Linking target lib/librte_net.so.24.1 00:04:17.966 [256/268] Linking target lib/librte_compressdev.so.24.1 00:04:17.966 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:04:18.226 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:18.226 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:18.226 [260/268] Linking target lib/librte_security.so.24.1 00:04:18.226 [261/268] Linking target lib/librte_hash.so.24.1 00:04:18.226 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:18.226 [263/268] Linking target lib/librte_ethdev.so.24.1 00:04:18.485 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:18.485 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:18.485 [266/268] Linking target lib/librte_power.so.24.1 00:04:19.054 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.313 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:19.313 INFO: 
autodetecting backend as ninja 00:04:19.313 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:37.430 CC lib/ut/ut.o 00:04:37.430 CC lib/log/log.o 00:04:37.430 CC lib/log/log_flags.o 00:04:37.430 CC lib/ut_mock/mock.o 00:04:37.430 CC lib/log/log_deprecated.o 00:04:37.430 LIB libspdk_ut.a 00:04:37.430 LIB libspdk_ut_mock.a 00:04:37.430 LIB libspdk_log.a 00:04:37.430 SO libspdk_ut.so.2.0 00:04:37.430 SO libspdk_ut_mock.so.6.0 00:04:37.430 SO libspdk_log.so.7.1 00:04:37.430 SYMLINK libspdk_ut.so 00:04:37.430 SYMLINK libspdk_ut_mock.so 00:04:37.430 SYMLINK libspdk_log.so 00:04:37.430 CC lib/util/base64.o 00:04:37.430 CC lib/util/crc16.o 00:04:37.430 CC lib/dma/dma.o 00:04:37.430 CC lib/util/bit_array.o 00:04:37.430 CC lib/util/cpuset.o 00:04:37.430 CC lib/util/crc32.o 00:04:37.430 CC lib/util/crc32c.o 00:04:37.430 CXX lib/trace_parser/trace.o 00:04:37.430 CC lib/ioat/ioat.o 00:04:37.430 CC lib/vfio_user/host/vfio_user_pci.o 00:04:37.430 CC lib/util/crc32_ieee.o 00:04:37.430 CC lib/util/crc64.o 00:04:37.430 CC lib/util/dif.o 00:04:37.430 CC lib/util/fd.o 00:04:37.430 CC lib/util/fd_group.o 00:04:37.430 LIB libspdk_dma.a 00:04:37.430 CC lib/util/file.o 00:04:37.430 SO libspdk_dma.so.5.0 00:04:37.430 CC lib/util/hexlify.o 00:04:37.430 CC lib/util/iov.o 00:04:37.430 SYMLINK libspdk_dma.so 00:04:37.430 CC lib/vfio_user/host/vfio_user.o 00:04:37.430 CC lib/util/math.o 00:04:37.430 LIB libspdk_ioat.a 00:04:37.430 SO libspdk_ioat.so.7.0 00:04:37.430 CC lib/util/net.o 00:04:37.430 CC lib/util/pipe.o 00:04:37.430 SYMLINK libspdk_ioat.so 00:04:37.430 CC lib/util/strerror_tls.o 00:04:37.430 CC lib/util/string.o 00:04:37.430 CC lib/util/uuid.o 00:04:37.689 CC lib/util/xor.o 00:04:37.689 CC lib/util/zipf.o 00:04:37.689 LIB libspdk_vfio_user.a 00:04:37.689 CC lib/util/md5.o 00:04:37.689 SO libspdk_vfio_user.so.5.0 00:04:37.689 SYMLINK libspdk_vfio_user.so 00:04:37.948 LIB libspdk_util.a 00:04:38.207 LIB libspdk_trace_parser.a 00:04:38.207 SO libspdk_util.so.10.1 00:04:38.207 SO libspdk_trace_parser.so.6.0 00:04:38.207 SYMLINK libspdk_trace_parser.so 00:04:38.207 SYMLINK libspdk_util.so 00:04:38.466 CC lib/json/json_parse.o 00:04:38.466 CC lib/json/json_util.o 00:04:38.466 CC lib/json/json_write.o 00:04:38.466 CC lib/vmd/vmd.o 00:04:38.466 CC lib/vmd/led.o 00:04:38.466 CC lib/conf/conf.o 00:04:38.466 CC lib/env_dpdk/env.o 00:04:38.466 CC lib/idxd/idxd.o 00:04:38.466 CC lib/idxd/idxd_user.o 00:04:38.466 CC lib/rdma_utils/rdma_utils.o 00:04:38.725 CC lib/idxd/idxd_kernel.o 00:04:38.725 LIB libspdk_conf.a 00:04:38.725 CC lib/env_dpdk/memory.o 00:04:38.725 CC lib/env_dpdk/pci.o 00:04:38.725 CC lib/env_dpdk/init.o 00:04:38.725 SO libspdk_conf.so.6.0 00:04:38.725 LIB libspdk_rdma_utils.a 00:04:38.725 LIB libspdk_json.a 00:04:38.725 SO libspdk_rdma_utils.so.1.0 00:04:38.725 SO libspdk_json.so.6.0 00:04:38.725 SYMLINK libspdk_conf.so 00:04:38.725 CC lib/env_dpdk/threads.o 00:04:38.984 CC lib/env_dpdk/pci_ioat.o 00:04:38.984 SYMLINK libspdk_rdma_utils.so 00:04:38.984 SYMLINK libspdk_json.so 00:04:38.984 CC lib/env_dpdk/pci_virtio.o 00:04:38.984 CC lib/env_dpdk/pci_vmd.o 00:04:38.985 CC lib/env_dpdk/pci_idxd.o 00:04:38.985 CC lib/env_dpdk/pci_event.o 00:04:38.985 CC lib/rdma_provider/common.o 00:04:38.985 CC lib/env_dpdk/sigbus_handler.o 00:04:39.244 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:39.244 CC lib/env_dpdk/pci_dpdk.o 00:04:39.244 LIB libspdk_idxd.a 00:04:39.244 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:39.244 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:04:39.244 SO libspdk_idxd.so.12.1 00:04:39.244 LIB libspdk_vmd.a 00:04:39.244 SO libspdk_vmd.so.6.0 00:04:39.244 SYMLINK libspdk_idxd.so 00:04:39.244 CC lib/jsonrpc/jsonrpc_server.o 00:04:39.244 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:39.244 CC lib/jsonrpc/jsonrpc_client.o 00:04:39.244 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:39.244 SYMLINK libspdk_vmd.so 00:04:39.244 LIB libspdk_rdma_provider.a 00:04:39.503 SO libspdk_rdma_provider.so.7.0 00:04:39.503 SYMLINK libspdk_rdma_provider.so 00:04:39.503 LIB libspdk_jsonrpc.a 00:04:39.763 SO libspdk_jsonrpc.so.6.0 00:04:39.763 SYMLINK libspdk_jsonrpc.so 00:04:40.022 LIB libspdk_env_dpdk.a 00:04:40.022 SO libspdk_env_dpdk.so.15.1 00:04:40.282 CC lib/rpc/rpc.o 00:04:40.282 SYMLINK libspdk_env_dpdk.so 00:04:40.541 LIB libspdk_rpc.a 00:04:40.541 SO libspdk_rpc.so.6.0 00:04:40.541 SYMLINK libspdk_rpc.so 00:04:41.110 CC lib/trace/trace.o 00:04:41.110 CC lib/trace/trace_flags.o 00:04:41.110 CC lib/trace/trace_rpc.o 00:04:41.110 CC lib/notify/notify.o 00:04:41.110 CC lib/notify/notify_rpc.o 00:04:41.110 CC lib/keyring/keyring.o 00:04:41.110 CC lib/keyring/keyring_rpc.o 00:04:41.110 LIB libspdk_notify.a 00:04:41.110 SO libspdk_notify.so.6.0 00:04:41.369 LIB libspdk_trace.a 00:04:41.369 LIB libspdk_keyring.a 00:04:41.369 SYMLINK libspdk_notify.so 00:04:41.369 SO libspdk_trace.so.11.0 00:04:41.369 SO libspdk_keyring.so.2.0 00:04:41.369 SYMLINK libspdk_trace.so 00:04:41.369 SYMLINK libspdk_keyring.so 00:04:41.937 CC lib/thread/thread.o 00:04:41.937 CC lib/thread/iobuf.o 00:04:41.937 CC lib/sock/sock.o 00:04:41.937 CC lib/sock/sock_rpc.o 00:04:42.196 LIB libspdk_sock.a 00:04:42.455 SO libspdk_sock.so.10.0 00:04:42.455 SYMLINK libspdk_sock.so 00:04:43.023 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:43.023 CC lib/nvme/nvme_ctrlr.o 00:04:43.023 CC lib/nvme/nvme_fabric.o 00:04:43.023 CC lib/nvme/nvme_ns_cmd.o 00:04:43.023 CC lib/nvme/nvme_ns.o 00:04:43.023 CC lib/nvme/nvme_pcie_common.o 00:04:43.023 CC lib/nvme/nvme_pcie.o 00:04:43.023 CC lib/nvme/nvme_qpair.o 00:04:43.023 CC lib/nvme/nvme.o 00:04:43.281 LIB libspdk_thread.a 00:04:43.541 SO libspdk_thread.so.11.0 00:04:43.541 CC lib/nvme/nvme_quirks.o 00:04:43.541 SYMLINK libspdk_thread.so 00:04:43.541 CC lib/nvme/nvme_transport.o 00:04:43.541 CC lib/nvme/nvme_discovery.o 00:04:43.541 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:43.541 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:43.541 CC lib/nvme/nvme_tcp.o 00:04:43.801 CC lib/nvme/nvme_opal.o 00:04:43.801 CC lib/nvme/nvme_io_msg.o 00:04:44.060 CC lib/accel/accel.o 00:04:44.060 CC lib/blob/blobstore.o 00:04:44.060 CC lib/blob/request.o 00:04:44.320 CC lib/init/json_config.o 00:04:44.320 CC lib/virtio/virtio.o 00:04:44.320 CC lib/blob/zeroes.o 00:04:44.320 CC lib/blob/blob_bs_dev.o 00:04:44.579 CC lib/init/subsystem.o 00:04:44.579 CC lib/accel/accel_rpc.o 00:04:44.579 CC lib/accel/accel_sw.o 00:04:44.579 CC lib/virtio/virtio_vhost_user.o 00:04:44.579 CC lib/virtio/virtio_vfio_user.o 00:04:44.579 CC lib/fsdev/fsdev.o 00:04:44.579 CC lib/init/subsystem_rpc.o 00:04:44.579 CC lib/init/rpc.o 00:04:44.839 CC lib/virtio/virtio_pci.o 00:04:44.839 CC lib/nvme/nvme_poll_group.o 00:04:44.839 CC lib/nvme/nvme_zns.o 00:04:44.839 CC lib/nvme/nvme_stubs.o 00:04:44.839 LIB libspdk_init.a 00:04:44.839 SO libspdk_init.so.6.0 00:04:45.098 SYMLINK libspdk_init.so 00:04:45.098 CC lib/fsdev/fsdev_io.o 00:04:45.098 LIB libspdk_virtio.a 00:04:45.098 SO libspdk_virtio.so.7.0 00:04:45.098 CC lib/nvme/nvme_auth.o 00:04:45.098 SYMLINK libspdk_virtio.so 00:04:45.098 CC 
lib/fsdev/fsdev_rpc.o 00:04:45.358 CC lib/nvme/nvme_cuse.o 00:04:45.358 CC lib/nvme/nvme_rdma.o 00:04:45.358 CC lib/event/app.o 00:04:45.358 LIB libspdk_accel.a 00:04:45.358 CC lib/event/reactor.o 00:04:45.358 SO libspdk_accel.so.16.0 00:04:45.358 LIB libspdk_fsdev.a 00:04:45.358 SO libspdk_fsdev.so.2.0 00:04:45.358 CC lib/event/log_rpc.o 00:04:45.358 CC lib/event/app_rpc.o 00:04:45.358 SYMLINK libspdk_accel.so 00:04:45.358 CC lib/event/scheduler_static.o 00:04:45.358 SYMLINK libspdk_fsdev.so 00:04:45.617 CC lib/bdev/bdev.o 00:04:45.617 CC lib/bdev/bdev_rpc.o 00:04:45.617 CC lib/bdev/bdev_zone.o 00:04:45.617 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:45.876 CC lib/bdev/part.o 00:04:45.876 LIB libspdk_event.a 00:04:45.876 SO libspdk_event.so.14.0 00:04:45.876 CC lib/bdev/scsi_nvme.o 00:04:45.876 SYMLINK libspdk_event.so 00:04:46.445 LIB libspdk_fuse_dispatcher.a 00:04:46.445 SO libspdk_fuse_dispatcher.so.1.0 00:04:46.445 SYMLINK libspdk_fuse_dispatcher.so 00:04:46.771 LIB libspdk_nvme.a 00:04:47.030 SO libspdk_nvme.so.15.0 00:04:47.290 SYMLINK libspdk_nvme.so 00:04:47.858 LIB libspdk_blob.a 00:04:47.858 SO libspdk_blob.so.12.0 00:04:47.858 SYMLINK libspdk_blob.so 00:04:48.426 CC lib/blobfs/blobfs.o 00:04:48.426 CC lib/blobfs/tree.o 00:04:48.426 CC lib/lvol/lvol.o 00:04:48.994 LIB libspdk_bdev.a 00:04:48.994 SO libspdk_bdev.so.17.0 00:04:48.994 SYMLINK libspdk_bdev.so 00:04:49.253 LIB libspdk_blobfs.a 00:04:49.253 CC lib/scsi/port.o 00:04:49.253 CC lib/scsi/lun.o 00:04:49.253 CC lib/scsi/scsi.o 00:04:49.253 CC lib/scsi/dev.o 00:04:49.253 CC lib/ublk/ublk.o 00:04:49.253 CC lib/nvmf/ctrlr.o 00:04:49.253 SO libspdk_blobfs.so.11.0 00:04:49.253 CC lib/ftl/ftl_core.o 00:04:49.253 CC lib/nbd/nbd.o 00:04:49.253 LIB libspdk_lvol.a 00:04:49.512 SO libspdk_lvol.so.11.0 00:04:49.512 SYMLINK libspdk_blobfs.so 00:04:49.512 CC lib/ftl/ftl_init.o 00:04:49.512 CC lib/ftl/ftl_layout.o 00:04:49.512 SYMLINK libspdk_lvol.so 00:04:49.512 CC lib/ublk/ublk_rpc.o 00:04:49.512 CC lib/scsi/scsi_bdev.o 00:04:49.512 CC lib/scsi/scsi_pr.o 00:04:49.771 CC lib/scsi/scsi_rpc.o 00:04:49.771 CC lib/ftl/ftl_debug.o 00:04:49.771 CC lib/ftl/ftl_io.o 00:04:49.771 CC lib/nbd/nbd_rpc.o 00:04:49.771 CC lib/scsi/task.o 00:04:49.771 CC lib/nvmf/ctrlr_discovery.o 00:04:49.771 CC lib/ftl/ftl_sb.o 00:04:49.771 CC lib/ftl/ftl_l2p.o 00:04:50.030 CC lib/ftl/ftl_l2p_flat.o 00:04:50.030 CC lib/ftl/ftl_nv_cache.o 00:04:50.030 LIB libspdk_nbd.a 00:04:50.030 SO libspdk_nbd.so.7.0 00:04:50.030 LIB libspdk_ublk.a 00:04:50.030 CC lib/ftl/ftl_band.o 00:04:50.030 CC lib/ftl/ftl_band_ops.o 00:04:50.030 SO libspdk_ublk.so.3.0 00:04:50.030 SYMLINK libspdk_nbd.so 00:04:50.030 CC lib/ftl/ftl_writer.o 00:04:50.030 LIB libspdk_scsi.a 00:04:50.030 CC lib/ftl/ftl_rq.o 00:04:50.030 SYMLINK libspdk_ublk.so 00:04:50.030 CC lib/ftl/ftl_reloc.o 00:04:50.030 CC lib/nvmf/ctrlr_bdev.o 00:04:50.030 SO libspdk_scsi.so.9.0 00:04:50.288 SYMLINK libspdk_scsi.so 00:04:50.288 CC lib/ftl/ftl_l2p_cache.o 00:04:50.288 CC lib/ftl/ftl_p2l.o 00:04:50.288 CC lib/nvmf/subsystem.o 00:04:50.546 CC lib/ftl/ftl_p2l_log.o 00:04:50.546 CC lib/iscsi/conn.o 00:04:50.546 CC lib/vhost/vhost.o 00:04:50.546 CC lib/vhost/vhost_rpc.o 00:04:50.805 CC lib/vhost/vhost_scsi.o 00:04:50.805 CC lib/vhost/vhost_blk.o 00:04:50.805 CC lib/vhost/rte_vhost_user.o 00:04:50.805 CC lib/iscsi/init_grp.o 00:04:51.064 CC lib/ftl/mngt/ftl_mngt.o 00:04:51.064 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:51.064 CC lib/iscsi/iscsi.o 00:04:51.064 CC lib/iscsi/param.o 00:04:51.323 CC lib/ftl/mngt/ftl_mngt_shutdown.o 
00:04:51.323 CC lib/nvmf/nvmf.o 00:04:51.323 CC lib/nvmf/nvmf_rpc.o 00:04:51.323 CC lib/nvmf/transport.o 00:04:51.583 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:51.583 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:51.583 CC lib/nvmf/tcp.o 00:04:51.583 CC lib/nvmf/stubs.o 00:04:51.843 CC lib/nvmf/mdns_server.o 00:04:51.843 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:51.843 LIB libspdk_vhost.a 00:04:51.843 CC lib/iscsi/portal_grp.o 00:04:51.843 SO libspdk_vhost.so.8.0 00:04:52.102 SYMLINK libspdk_vhost.so 00:04:52.102 CC lib/iscsi/tgt_node.o 00:04:52.102 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:52.102 CC lib/nvmf/rdma.o 00:04:52.102 CC lib/nvmf/auth.o 00:04:52.102 CC lib/iscsi/iscsi_subsystem.o 00:04:52.361 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:52.361 CC lib/iscsi/iscsi_rpc.o 00:04:52.361 CC lib/iscsi/task.o 00:04:52.361 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:52.361 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:52.361 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:52.620 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:52.620 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:52.620 CC lib/ftl/utils/ftl_conf.o 00:04:52.620 CC lib/ftl/utils/ftl_md.o 00:04:52.620 CC lib/ftl/utils/ftl_mempool.o 00:04:52.620 CC lib/ftl/utils/ftl_bitmap.o 00:04:52.620 LIB libspdk_iscsi.a 00:04:52.879 CC lib/ftl/utils/ftl_property.o 00:04:52.879 SO libspdk_iscsi.so.8.0 00:04:52.879 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:52.879 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:52.879 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:52.879 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:52.879 SYMLINK libspdk_iscsi.so 00:04:53.139 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:53.139 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:53.139 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:53.139 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:53.139 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:53.139 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:53.139 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:53.139 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:53.139 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:53.139 CC lib/ftl/base/ftl_base_dev.o 00:04:53.139 CC lib/ftl/base/ftl_base_bdev.o 00:04:53.398 CC lib/ftl/ftl_trace.o 00:04:53.657 LIB libspdk_ftl.a 00:04:53.915 SO libspdk_ftl.so.9.0 00:04:54.175 SYMLINK libspdk_ftl.so 00:04:54.434 LIB libspdk_nvmf.a 00:04:54.693 SO libspdk_nvmf.so.20.0 00:04:54.952 SYMLINK libspdk_nvmf.so 00:04:55.577 CC module/env_dpdk/env_dpdk_rpc.o 00:04:55.577 CC module/accel/error/accel_error.o 00:04:55.577 CC module/accel/iaa/accel_iaa.o 00:04:55.577 CC module/blob/bdev/blob_bdev.o 00:04:55.577 CC module/keyring/file/keyring.o 00:04:55.577 CC module/accel/ioat/accel_ioat.o 00:04:55.577 CC module/fsdev/aio/fsdev_aio.o 00:04:55.577 CC module/sock/posix/posix.o 00:04:55.577 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:55.577 LIB libspdk_env_dpdk_rpc.a 00:04:55.577 CC module/accel/dsa/accel_dsa.o 00:04:55.577 SO libspdk_env_dpdk_rpc.so.6.0 00:04:55.577 SYMLINK libspdk_env_dpdk_rpc.so 00:04:55.577 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:55.577 CC module/keyring/file/keyring_rpc.o 00:04:55.836 CC module/accel/ioat/accel_ioat_rpc.o 00:04:55.836 CC module/accel/iaa/accel_iaa_rpc.o 00:04:55.836 CC module/accel/error/accel_error_rpc.o 00:04:55.836 LIB libspdk_scheduler_dynamic.a 00:04:55.836 SO libspdk_scheduler_dynamic.so.4.0 00:04:55.836 LIB libspdk_keyring_file.a 00:04:55.836 SO libspdk_keyring_file.so.2.0 00:04:55.836 LIB libspdk_blob_bdev.a 00:04:55.836 SYMLINK libspdk_scheduler_dynamic.so 00:04:55.837 LIB libspdk_accel_ioat.a 00:04:55.837 CC module/accel/dsa/accel_dsa_rpc.o 00:04:55.837 SO 
libspdk_blob_bdev.so.12.0 00:04:55.837 SO libspdk_accel_ioat.so.6.0 00:04:55.837 LIB libspdk_accel_iaa.a 00:04:55.837 LIB libspdk_accel_error.a 00:04:55.837 SYMLINK libspdk_keyring_file.so 00:04:55.837 SO libspdk_accel_iaa.so.3.0 00:04:55.837 SO libspdk_accel_error.so.2.0 00:04:55.837 SYMLINK libspdk_blob_bdev.so 00:04:55.837 SYMLINK libspdk_accel_ioat.so 00:04:56.096 CC module/fsdev/aio/linux_aio_mgr.o 00:04:56.096 SYMLINK libspdk_accel_iaa.so 00:04:56.096 SYMLINK libspdk_accel_error.so 00:04:56.096 LIB libspdk_accel_dsa.a 00:04:56.096 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:56.096 SO libspdk_accel_dsa.so.5.0 00:04:56.096 CC module/scheduler/gscheduler/gscheduler.o 00:04:56.096 CC module/keyring/linux/keyring.o 00:04:56.096 SYMLINK libspdk_accel_dsa.so 00:04:56.096 CC module/keyring/linux/keyring_rpc.o 00:04:56.096 LIB libspdk_scheduler_dpdk_governor.a 00:04:56.096 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:56.355 LIB libspdk_scheduler_gscheduler.a 00:04:56.355 SO libspdk_scheduler_gscheduler.so.4.0 00:04:56.355 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:56.355 CC module/bdev/delay/vbdev_delay.o 00:04:56.355 CC module/bdev/error/vbdev_error.o 00:04:56.355 CC module/blobfs/bdev/blobfs_bdev.o 00:04:56.355 CC module/bdev/gpt/gpt.o 00:04:56.355 LIB libspdk_fsdev_aio.a 00:04:56.356 LIB libspdk_keyring_linux.a 00:04:56.356 SYMLINK libspdk_scheduler_gscheduler.so 00:04:56.356 CC module/bdev/gpt/vbdev_gpt.o 00:04:56.356 SO libspdk_keyring_linux.so.1.0 00:04:56.356 SO libspdk_fsdev_aio.so.1.0 00:04:56.356 LIB libspdk_sock_posix.a 00:04:56.356 SYMLINK libspdk_keyring_linux.so 00:04:56.356 CC module/bdev/error/vbdev_error_rpc.o 00:04:56.356 SO libspdk_sock_posix.so.6.0 00:04:56.356 SYMLINK libspdk_fsdev_aio.so 00:04:56.356 CC module/bdev/lvol/vbdev_lvol.o 00:04:56.356 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:56.615 CC module/bdev/malloc/bdev_malloc.o 00:04:56.615 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:56.615 SYMLINK libspdk_sock_posix.so 00:04:56.615 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:56.615 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:56.615 LIB libspdk_bdev_error.a 00:04:56.615 LIB libspdk_bdev_gpt.a 00:04:56.615 SO libspdk_bdev_error.so.6.0 00:04:56.615 CC module/bdev/null/bdev_null.o 00:04:56.615 SO libspdk_bdev_gpt.so.6.0 00:04:56.615 CC module/bdev/null/bdev_null_rpc.o 00:04:56.615 LIB libspdk_blobfs_bdev.a 00:04:56.615 SO libspdk_blobfs_bdev.so.6.0 00:04:56.615 SYMLINK libspdk_bdev_gpt.so 00:04:56.615 LIB libspdk_bdev_delay.a 00:04:56.615 SYMLINK libspdk_bdev_error.so 00:04:56.874 SO libspdk_bdev_delay.so.6.0 00:04:56.874 SYMLINK libspdk_blobfs_bdev.so 00:04:56.874 SYMLINK libspdk_bdev_delay.so 00:04:56.874 LIB libspdk_bdev_malloc.a 00:04:56.874 CC module/bdev/passthru/vbdev_passthru.o 00:04:56.874 CC module/bdev/nvme/bdev_nvme.o 00:04:56.874 LIB libspdk_bdev_null.a 00:04:56.874 SO libspdk_bdev_malloc.so.6.0 00:04:56.874 CC module/bdev/raid/bdev_raid.o 00:04:56.874 CC module/bdev/raid/bdev_raid_rpc.o 00:04:56.875 SO libspdk_bdev_null.so.6.0 00:04:56.875 CC module/bdev/split/vbdev_split.o 00:04:57.134 SYMLINK libspdk_bdev_malloc.so 00:04:57.134 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:57.134 CC module/bdev/split/vbdev_split_rpc.o 00:04:57.134 LIB libspdk_bdev_lvol.a 00:04:57.134 CC module/bdev/xnvme/bdev_xnvme.o 00:04:57.134 SYMLINK libspdk_bdev_null.so 00:04:57.134 SO libspdk_bdev_lvol.so.6.0 00:04:57.134 CC module/bdev/raid/bdev_raid_sb.o 00:04:57.134 SYMLINK libspdk_bdev_lvol.so 00:04:57.134 CC module/bdev/raid/raid0.o 00:04:57.134 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:04:57.134 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:57.134 LIB libspdk_bdev_split.a 00:04:57.134 CC module/bdev/raid/raid1.o 00:04:57.134 SO libspdk_bdev_split.so.6.0 00:04:57.393 CC module/bdev/raid/concat.o 00:04:57.393 SYMLINK libspdk_bdev_split.so 00:04:57.393 LIB libspdk_bdev_passthru.a 00:04:57.393 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:57.393 LIB libspdk_bdev_xnvme.a 00:04:57.393 SO libspdk_bdev_passthru.so.6.0 00:04:57.393 SO libspdk_bdev_xnvme.so.3.0 00:04:57.393 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:57.393 SYMLINK libspdk_bdev_passthru.so 00:04:57.393 CC module/bdev/nvme/nvme_rpc.o 00:04:57.393 SYMLINK libspdk_bdev_xnvme.so 00:04:57.393 CC module/bdev/aio/bdev_aio.o 00:04:57.393 CC module/bdev/aio/bdev_aio_rpc.o 00:04:57.393 CC module/bdev/ftl/bdev_ftl.o 00:04:57.653 LIB libspdk_bdev_zone_block.a 00:04:57.653 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:57.653 SO libspdk_bdev_zone_block.so.6.0 00:04:57.653 SYMLINK libspdk_bdev_zone_block.so 00:04:57.653 CC module/bdev/iscsi/bdev_iscsi.o 00:04:57.653 CC module/bdev/nvme/bdev_mdns_client.o 00:04:57.653 CC module/bdev/nvme/vbdev_opal.o 00:04:57.653 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:57.913 LIB libspdk_bdev_ftl.a 00:04:57.913 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:57.913 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:57.913 SO libspdk_bdev_ftl.so.6.0 00:04:57.913 LIB libspdk_bdev_aio.a 00:04:57.913 SO libspdk_bdev_aio.so.6.0 00:04:57.913 SYMLINK libspdk_bdev_ftl.so 00:04:57.913 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:57.913 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:57.913 SYMLINK libspdk_bdev_aio.so 00:04:57.913 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:57.913 LIB libspdk_bdev_raid.a 00:04:58.172 SO libspdk_bdev_raid.so.6.0 00:04:58.172 LIB libspdk_bdev_iscsi.a 00:04:58.172 SO libspdk_bdev_iscsi.so.6.0 00:04:58.172 SYMLINK libspdk_bdev_raid.so 00:04:58.172 SYMLINK libspdk_bdev_iscsi.so 00:04:58.431 LIB libspdk_bdev_virtio.a 00:04:58.431 SO libspdk_bdev_virtio.so.6.0 00:04:58.690 SYMLINK libspdk_bdev_virtio.so 00:04:59.656 LIB libspdk_bdev_nvme.a 00:04:59.926 SO libspdk_bdev_nvme.so.7.1 00:04:59.926 SYMLINK libspdk_bdev_nvme.so 00:05:00.864 CC module/event/subsystems/keyring/keyring.o 00:05:00.864 CC module/event/subsystems/iobuf/iobuf.o 00:05:00.864 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:00.864 CC module/event/subsystems/scheduler/scheduler.o 00:05:00.864 CC module/event/subsystems/sock/sock.o 00:05:00.864 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:00.864 CC module/event/subsystems/fsdev/fsdev.o 00:05:00.864 CC module/event/subsystems/vmd/vmd.o 00:05:00.864 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:00.864 LIB libspdk_event_keyring.a 00:05:00.864 LIB libspdk_event_vhost_blk.a 00:05:00.864 LIB libspdk_event_sock.a 00:05:00.864 LIB libspdk_event_scheduler.a 00:05:00.864 LIB libspdk_event_fsdev.a 00:05:00.864 LIB libspdk_event_iobuf.a 00:05:00.864 LIB libspdk_event_vmd.a 00:05:00.864 SO libspdk_event_keyring.so.1.0 00:05:00.864 SO libspdk_event_vhost_blk.so.3.0 00:05:00.864 SO libspdk_event_sock.so.5.0 00:05:00.864 SO libspdk_event_scheduler.so.4.0 00:05:00.864 SO libspdk_event_fsdev.so.1.0 00:05:00.864 SO libspdk_event_iobuf.so.3.0 00:05:00.864 SO libspdk_event_vmd.so.6.0 00:05:00.864 SYMLINK libspdk_event_keyring.so 00:05:00.864 SYMLINK libspdk_event_vhost_blk.so 00:05:00.864 SYMLINK libspdk_event_sock.so 00:05:00.864 SYMLINK libspdk_event_scheduler.so 00:05:00.864 SYMLINK libspdk_event_fsdev.so 00:05:00.864 SYMLINK 
libspdk_event_iobuf.so 00:05:00.864 SYMLINK libspdk_event_vmd.so 00:05:01.433 CC module/event/subsystems/accel/accel.o 00:05:01.433 LIB libspdk_event_accel.a 00:05:01.692 SO libspdk_event_accel.so.6.0 00:05:01.692 SYMLINK libspdk_event_accel.so 00:05:02.260 CC module/event/subsystems/bdev/bdev.o 00:05:02.260 LIB libspdk_event_bdev.a 00:05:02.260 SO libspdk_event_bdev.so.6.0 00:05:02.260 SYMLINK libspdk_event_bdev.so 00:05:02.829 CC module/event/subsystems/scsi/scsi.o 00:05:02.829 CC module/event/subsystems/ublk/ublk.o 00:05:02.829 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:02.829 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:02.829 CC module/event/subsystems/nbd/nbd.o 00:05:02.829 LIB libspdk_event_ublk.a 00:05:02.829 LIB libspdk_event_scsi.a 00:05:02.829 SO libspdk_event_ublk.so.3.0 00:05:02.829 LIB libspdk_event_nbd.a 00:05:03.088 SO libspdk_event_scsi.so.6.0 00:05:03.088 LIB libspdk_event_nvmf.a 00:05:03.088 SO libspdk_event_nbd.so.6.0 00:05:03.088 SYMLINK libspdk_event_ublk.so 00:05:03.088 SYMLINK libspdk_event_scsi.so 00:05:03.088 SO libspdk_event_nvmf.so.6.0 00:05:03.088 SYMLINK libspdk_event_nbd.so 00:05:03.088 SYMLINK libspdk_event_nvmf.so 00:05:03.347 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:03.347 CC module/event/subsystems/iscsi/iscsi.o 00:05:03.607 LIB libspdk_event_vhost_scsi.a 00:05:03.607 LIB libspdk_event_iscsi.a 00:05:03.607 SO libspdk_event_vhost_scsi.so.3.0 00:05:03.607 SO libspdk_event_iscsi.so.6.0 00:05:03.607 SYMLINK libspdk_event_vhost_scsi.so 00:05:03.866 SYMLINK libspdk_event_iscsi.so 00:05:03.866 SO libspdk.so.6.0 00:05:03.866 SYMLINK libspdk.so 00:05:04.436 TEST_HEADER include/spdk/accel.h 00:05:04.436 CXX app/trace/trace.o 00:05:04.436 TEST_HEADER include/spdk/accel_module.h 00:05:04.436 TEST_HEADER include/spdk/assert.h 00:05:04.436 TEST_HEADER include/spdk/barrier.h 00:05:04.436 CC test/rpc_client/rpc_client_test.o 00:05:04.436 TEST_HEADER include/spdk/base64.h 00:05:04.436 TEST_HEADER include/spdk/bdev.h 00:05:04.436 TEST_HEADER include/spdk/bdev_module.h 00:05:04.436 TEST_HEADER include/spdk/bdev_zone.h 00:05:04.436 TEST_HEADER include/spdk/bit_array.h 00:05:04.436 TEST_HEADER include/spdk/bit_pool.h 00:05:04.436 TEST_HEADER include/spdk/blob_bdev.h 00:05:04.436 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:04.436 TEST_HEADER include/spdk/blobfs.h 00:05:04.436 TEST_HEADER include/spdk/blob.h 00:05:04.436 TEST_HEADER include/spdk/conf.h 00:05:04.436 TEST_HEADER include/spdk/config.h 00:05:04.436 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:04.436 TEST_HEADER include/spdk/cpuset.h 00:05:04.436 TEST_HEADER include/spdk/crc16.h 00:05:04.436 TEST_HEADER include/spdk/crc32.h 00:05:04.436 TEST_HEADER include/spdk/crc64.h 00:05:04.436 TEST_HEADER include/spdk/dif.h 00:05:04.436 TEST_HEADER include/spdk/dma.h 00:05:04.436 TEST_HEADER include/spdk/endian.h 00:05:04.436 TEST_HEADER include/spdk/env_dpdk.h 00:05:04.436 TEST_HEADER include/spdk/env.h 00:05:04.436 TEST_HEADER include/spdk/event.h 00:05:04.436 TEST_HEADER include/spdk/fd_group.h 00:05:04.436 TEST_HEADER include/spdk/fd.h 00:05:04.436 TEST_HEADER include/spdk/file.h 00:05:04.436 TEST_HEADER include/spdk/fsdev.h 00:05:04.436 TEST_HEADER include/spdk/fsdev_module.h 00:05:04.436 TEST_HEADER include/spdk/ftl.h 00:05:04.436 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:04.436 TEST_HEADER include/spdk/gpt_spec.h 00:05:04.436 TEST_HEADER include/spdk/hexlify.h 00:05:04.436 TEST_HEADER include/spdk/histogram_data.h 00:05:04.436 TEST_HEADER include/spdk/idxd.h 00:05:04.436 TEST_HEADER 
include/spdk/idxd_spec.h 00:05:04.436 CC examples/ioat/perf/perf.o 00:05:04.436 CC examples/util/zipf/zipf.o 00:05:04.436 TEST_HEADER include/spdk/init.h 00:05:04.436 TEST_HEADER include/spdk/ioat.h 00:05:04.436 TEST_HEADER include/spdk/ioat_spec.h 00:05:04.436 TEST_HEADER include/spdk/iscsi_spec.h 00:05:04.436 TEST_HEADER include/spdk/json.h 00:05:04.436 TEST_HEADER include/spdk/jsonrpc.h 00:05:04.436 CC test/thread/poller_perf/poller_perf.o 00:05:04.436 TEST_HEADER include/spdk/keyring.h 00:05:04.436 TEST_HEADER include/spdk/keyring_module.h 00:05:04.436 TEST_HEADER include/spdk/likely.h 00:05:04.436 TEST_HEADER include/spdk/log.h 00:05:04.436 TEST_HEADER include/spdk/lvol.h 00:05:04.436 TEST_HEADER include/spdk/md5.h 00:05:04.436 TEST_HEADER include/spdk/memory.h 00:05:04.436 TEST_HEADER include/spdk/mmio.h 00:05:04.436 CC test/dma/test_dma/test_dma.o 00:05:04.436 TEST_HEADER include/spdk/nbd.h 00:05:04.436 TEST_HEADER include/spdk/net.h 00:05:04.436 TEST_HEADER include/spdk/notify.h 00:05:04.436 TEST_HEADER include/spdk/nvme.h 00:05:04.436 TEST_HEADER include/spdk/nvme_intel.h 00:05:04.436 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:04.436 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:04.436 TEST_HEADER include/spdk/nvme_spec.h 00:05:04.436 TEST_HEADER include/spdk/nvme_zns.h 00:05:04.436 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:04.436 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:04.436 TEST_HEADER include/spdk/nvmf.h 00:05:04.436 TEST_HEADER include/spdk/nvmf_spec.h 00:05:04.436 CC test/app/bdev_svc/bdev_svc.o 00:05:04.436 TEST_HEADER include/spdk/nvmf_transport.h 00:05:04.436 TEST_HEADER include/spdk/opal.h 00:05:04.436 TEST_HEADER include/spdk/opal_spec.h 00:05:04.436 TEST_HEADER include/spdk/pci_ids.h 00:05:04.436 TEST_HEADER include/spdk/pipe.h 00:05:04.436 TEST_HEADER include/spdk/queue.h 00:05:04.436 TEST_HEADER include/spdk/reduce.h 00:05:04.436 TEST_HEADER include/spdk/rpc.h 00:05:04.436 TEST_HEADER include/spdk/scheduler.h 00:05:04.436 TEST_HEADER include/spdk/scsi.h 00:05:04.436 TEST_HEADER include/spdk/scsi_spec.h 00:05:04.436 TEST_HEADER include/spdk/sock.h 00:05:04.436 TEST_HEADER include/spdk/stdinc.h 00:05:04.436 TEST_HEADER include/spdk/string.h 00:05:04.436 TEST_HEADER include/spdk/thread.h 00:05:04.436 TEST_HEADER include/spdk/trace.h 00:05:04.436 TEST_HEADER include/spdk/trace_parser.h 00:05:04.436 TEST_HEADER include/spdk/tree.h 00:05:04.436 TEST_HEADER include/spdk/ublk.h 00:05:04.436 TEST_HEADER include/spdk/util.h 00:05:04.436 TEST_HEADER include/spdk/uuid.h 00:05:04.436 CC test/env/mem_callbacks/mem_callbacks.o 00:05:04.436 TEST_HEADER include/spdk/version.h 00:05:04.436 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:04.436 LINK rpc_client_test 00:05:04.436 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:04.436 TEST_HEADER include/spdk/vhost.h 00:05:04.436 TEST_HEADER include/spdk/vmd.h 00:05:04.436 TEST_HEADER include/spdk/xor.h 00:05:04.436 TEST_HEADER include/spdk/zipf.h 00:05:04.436 CXX test/cpp_headers/accel.o 00:05:04.696 LINK interrupt_tgt 00:05:04.696 LINK zipf 00:05:04.696 LINK poller_perf 00:05:04.696 LINK ioat_perf 00:05:04.696 LINK bdev_svc 00:05:04.696 LINK spdk_trace 00:05:04.696 CXX test/cpp_headers/accel_module.o 00:05:04.696 CC app/trace_record/trace_record.o 00:05:04.955 CC app/nvmf_tgt/nvmf_main.o 00:05:04.955 CC app/iscsi_tgt/iscsi_tgt.o 00:05:04.955 CXX test/cpp_headers/assert.o 00:05:04.955 CC examples/ioat/verify/verify.o 00:05:04.955 CC app/spdk_tgt/spdk_tgt.o 00:05:04.955 LINK test_dma 00:05:04.955 CC app/spdk_lspci/spdk_lspci.o 
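[Editor's note] The long run of "CXX test/cpp_headers/*.o" entries starting above is a header-hygiene test: every public header under include/spdk/ is compiled as its own translation unit, so a header that forgets one of its own includes fails on its own rather than hiding behind another file's includes. A minimal sketch of that idea — the loop, flags, and checkout path are illustrative assumptions, not the harness's actual Makefile:

  #!/usr/bin/env bash
  # Compile each public SPDK header in isolation; a header that is not
  # self-contained will fail to parse on its own.
  set -euo pipefail
  spdk_root=${1:-/home/vagrant/spdk_repo/spdk}   # assumed checkout path from this log
  for hdr in "$spdk_root"/include/spdk/*.h; do
      # -fsyntax-only: type-check the header as a C++ TU, no object file
      g++ -fsyntax-only -I "$spdk_root/include" -x c++ "$hdr" \
          || echo "not self-contained: $hdr" >&2
  done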
00:05:04.955 LINK nvmf_tgt 00:05:04.955 CXX test/cpp_headers/barrier.o 00:05:04.955 LINK mem_callbacks 00:05:04.955 LINK spdk_trace_record 00:05:04.955 LINK iscsi_tgt 00:05:05.214 LINK verify 00:05:05.214 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:05.214 LINK spdk_tgt 00:05:05.214 LINK spdk_lspci 00:05:05.214 CXX test/cpp_headers/base64.o 00:05:05.214 CC test/env/vtophys/vtophys.o 00:05:05.214 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:05.214 CC test/env/memory/memory_ut.o 00:05:05.474 CXX test/cpp_headers/bdev.o 00:05:05.474 LINK vtophys 00:05:05.474 CC examples/sock/hello_world/hello_sock.o 00:05:05.474 CC examples/thread/thread/thread_ex.o 00:05:05.474 LINK env_dpdk_post_init 00:05:05.474 CC examples/vmd/lsvmd/lsvmd.o 00:05:05.474 CC app/spdk_nvme_perf/perf.o 00:05:05.474 CC examples/idxd/perf/perf.o 00:05:05.474 LINK nvme_fuzz 00:05:05.733 CXX test/cpp_headers/bdev_module.o 00:05:05.733 LINK lsvmd 00:05:05.733 CC app/spdk_nvme_identify/identify.o 00:05:05.733 LINK hello_sock 00:05:05.733 LINK thread 00:05:05.733 CC app/spdk_nvme_discover/discovery_aer.o 00:05:05.733 CXX test/cpp_headers/bdev_zone.o 00:05:05.992 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:05.992 CC examples/vmd/led/led.o 00:05:05.992 LINK idxd_perf 00:05:05.992 LINK spdk_nvme_discover 00:05:05.992 CXX test/cpp_headers/bit_array.o 00:05:05.992 LINK led 00:05:05.992 CC app/spdk_top/spdk_top.o 00:05:05.992 CC test/app/histogram_perf/histogram_perf.o 00:05:06.251 CXX test/cpp_headers/bit_pool.o 00:05:06.251 CC test/app/jsoncat/jsoncat.o 00:05:06.251 LINK histogram_perf 00:05:06.251 CC test/app/stub/stub.o 00:05:06.251 CXX test/cpp_headers/blob_bdev.o 00:05:06.251 LINK jsoncat 00:05:06.510 CC examples/nvme/hello_world/hello_world.o 00:05:06.510 LINK memory_ut 00:05:06.510 LINK spdk_nvme_perf 00:05:06.510 LINK stub 00:05:06.510 CC examples/nvme/reconnect/reconnect.o 00:05:06.510 CXX test/cpp_headers/blobfs_bdev.o 00:05:06.510 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:06.769 LINK hello_world 00:05:06.769 CC test/env/pci/pci_ut.o 00:05:06.769 LINK spdk_nvme_identify 00:05:06.769 CXX test/cpp_headers/blobfs.o 00:05:06.769 CC examples/nvme/arbitration/arbitration.o 00:05:06.769 CC examples/nvme/hotplug/hotplug.o 00:05:06.769 LINK reconnect 00:05:06.769 CXX test/cpp_headers/blob.o 00:05:07.029 CXX test/cpp_headers/conf.o 00:05:07.029 LINK spdk_top 00:05:07.029 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:07.029 LINK hotplug 00:05:07.029 CC examples/nvme/abort/abort.o 00:05:07.029 CXX test/cpp_headers/config.o 00:05:07.029 LINK arbitration 00:05:07.029 CXX test/cpp_headers/cpuset.o 00:05:07.029 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:07.288 LINK nvme_manage 00:05:07.288 LINK pci_ut 00:05:07.288 CXX test/cpp_headers/crc16.o 00:05:07.288 LINK cmb_copy 00:05:07.288 CC app/vhost/vhost.o 00:05:07.288 LINK pmr_persistence 00:05:07.288 CXX test/cpp_headers/crc32.o 00:05:07.288 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:07.288 CXX test/cpp_headers/crc64.o 00:05:07.547 LINK abort 00:05:07.547 CC app/spdk_dd/spdk_dd.o 00:05:07.547 LINK vhost 00:05:07.547 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:07.547 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:07.547 CXX test/cpp_headers/dif.o 00:05:07.547 CC app/fio/nvme/fio_plugin.o 00:05:07.547 CC examples/accel/perf/accel_perf.o 00:05:07.805 CC app/fio/bdev/fio_plugin.o 00:05:07.805 CXX test/cpp_headers/dma.o 00:05:07.805 LINK hello_fsdev 00:05:07.806 LINK iscsi_fuzz 00:05:07.806 CC test/event/event_perf/event_perf.o 00:05:07.806 LINK spdk_dd 
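[Editor's note] Several of the targets linked in this stretch (spdk_nvme_perf, spdk_nvme_identify, spdk_nvme_discover, spdk_top, spdk_dd) are standalone tools; outside the harness they can be pointed at a controller directly once devices are bound to a userspace driver. A hedged usage sketch — binary paths and flags are my assumptions about the built tree, and the PCIe address is borrowed from the devices seen later in this log:

  cd /home/vagrant/spdk_repo/spdk
  sudo scripts/setup.sh                      # bind NVMe devices to uio/vfio
  # Identify one controller by PCIe address (address illustrative)
  sudo build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
  # 4 KiB random reads, queue depth 32, for 10 s against the same controller
  sudo build/bin/spdk_nvme_perf -q 32 -o 4096 -w randread -t 10 \
      -r 'trtype:PCIe traddr:0000:00:10.0'
  sudo scripts/setup.sh reset                # hand the devices back to the kernel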
00:05:07.806 CXX test/cpp_headers/endian.o 00:05:07.806 CC examples/blob/hello_world/hello_blob.o 00:05:08.065 LINK event_perf 00:05:08.065 LINK vhost_fuzz 00:05:08.065 CXX test/cpp_headers/env_dpdk.o 00:05:08.065 LINK hello_blob 00:05:08.065 CC test/event/reactor/reactor.o 00:05:08.065 LINK accel_perf 00:05:08.065 CC test/nvme/aer/aer.o 00:05:08.324 CXX test/cpp_headers/env.o 00:05:08.324 LINK spdk_bdev 00:05:08.324 CC test/nvme/reset/reset.o 00:05:08.324 CC test/accel/dif/dif.o 00:05:08.324 LINK spdk_nvme 00:05:08.324 CC test/blobfs/mkfs/mkfs.o 00:05:08.324 LINK reactor 00:05:08.324 CXX test/cpp_headers/event.o 00:05:08.324 CC test/nvme/sgl/sgl.o 00:05:08.324 CC test/nvme/e2edp/nvme_dp.o 00:05:08.583 LINK mkfs 00:05:08.583 CC examples/blob/cli/blobcli.o 00:05:08.583 LINK reset 00:05:08.583 LINK aer 00:05:08.583 CC test/event/reactor_perf/reactor_perf.o 00:05:08.583 CXX test/cpp_headers/fd_group.o 00:05:08.583 CC test/lvol/esnap/esnap.o 00:05:08.583 CXX test/cpp_headers/fd.o 00:05:08.583 LINK reactor_perf 00:05:08.842 LINK sgl 00:05:08.842 LINK nvme_dp 00:05:08.842 CC test/event/app_repeat/app_repeat.o 00:05:08.842 CXX test/cpp_headers/file.o 00:05:08.842 CC test/event/scheduler/scheduler.o 00:05:08.842 LINK app_repeat 00:05:08.842 CC examples/bdev/hello_world/hello_bdev.o 00:05:08.842 LINK blobcli 00:05:09.101 LINK dif 00:05:09.101 CXX test/cpp_headers/fsdev.o 00:05:09.101 CC examples/bdev/bdevperf/bdevperf.o 00:05:09.101 CC test/nvme/overhead/overhead.o 00:05:09.101 CC test/nvme/err_injection/err_injection.o 00:05:09.101 LINK scheduler 00:05:09.101 CXX test/cpp_headers/fsdev_module.o 00:05:09.101 CC test/nvme/startup/startup.o 00:05:09.101 LINK hello_bdev 00:05:09.101 LINK err_injection 00:05:09.360 CC test/nvme/reserve/reserve.o 00:05:09.360 LINK overhead 00:05:09.360 CXX test/cpp_headers/ftl.o 00:05:09.360 CC test/nvme/simple_copy/simple_copy.o 00:05:09.360 LINK startup 00:05:09.360 CC test/bdev/bdevio/bdevio.o 00:05:09.360 CXX test/cpp_headers/fuse_dispatcher.o 00:05:09.619 CC test/nvme/connect_stress/connect_stress.o 00:05:09.619 LINK reserve 00:05:09.619 CXX test/cpp_headers/gpt_spec.o 00:05:09.619 CC test/nvme/boot_partition/boot_partition.o 00:05:09.619 LINK simple_copy 00:05:09.619 CXX test/cpp_headers/hexlify.o 00:05:09.619 CC test/nvme/compliance/nvme_compliance.o 00:05:09.619 CXX test/cpp_headers/histogram_data.o 00:05:09.619 LINK boot_partition 00:05:09.619 LINK connect_stress 00:05:09.879 CC test/nvme/fused_ordering/fused_ordering.o 00:05:09.879 CXX test/cpp_headers/idxd.o 00:05:09.879 LINK bdevio 00:05:09.879 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:09.879 CXX test/cpp_headers/idxd_spec.o 00:05:09.879 CXX test/cpp_headers/init.o 00:05:09.879 LINK bdevperf 00:05:09.879 CC test/nvme/fdp/fdp.o 00:05:09.879 LINK fused_ordering 00:05:09.879 CXX test/cpp_headers/ioat.o 00:05:09.879 LINK nvme_compliance 00:05:10.178 LINK doorbell_aers 00:05:10.178 CXX test/cpp_headers/ioat_spec.o 00:05:10.178 CXX test/cpp_headers/iscsi_spec.o 00:05:10.178 CC test/nvme/cuse/cuse.o 00:05:10.178 CXX test/cpp_headers/json.o 00:05:10.178 CXX test/cpp_headers/jsonrpc.o 00:05:10.178 CXX test/cpp_headers/keyring.o 00:05:10.178 CXX test/cpp_headers/keyring_module.o 00:05:10.178 CXX test/cpp_headers/likely.o 00:05:10.178 CXX test/cpp_headers/log.o 00:05:10.178 CXX test/cpp_headers/lvol.o 00:05:10.448 CXX test/cpp_headers/md5.o 00:05:10.448 LINK fdp 00:05:10.448 CXX test/cpp_headers/memory.o 00:05:10.448 CXX test/cpp_headers/mmio.o 00:05:10.448 CXX test/cpp_headers/nbd.o 00:05:10.448 CXX 
test/cpp_headers/net.o 00:05:10.448 CXX test/cpp_headers/notify.o 00:05:10.448 CC examples/nvmf/nvmf/nvmf.o 00:05:10.448 CXX test/cpp_headers/nvme.o 00:05:10.448 CXX test/cpp_headers/nvme_intel.o 00:05:10.448 CXX test/cpp_headers/nvme_ocssd.o 00:05:10.448 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:10.448 CXX test/cpp_headers/nvme_spec.o 00:05:10.448 CXX test/cpp_headers/nvme_zns.o 00:05:10.448 CXX test/cpp_headers/nvmf_cmd.o 00:05:10.708 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:10.708 CXX test/cpp_headers/nvmf.o 00:05:10.708 CXX test/cpp_headers/nvmf_spec.o 00:05:10.708 CXX test/cpp_headers/nvmf_transport.o 00:05:10.708 CXX test/cpp_headers/opal.o 00:05:10.708 CXX test/cpp_headers/opal_spec.o 00:05:10.708 LINK nvmf 00:05:10.708 CXX test/cpp_headers/pci_ids.o 00:05:10.708 CXX test/cpp_headers/pipe.o 00:05:10.708 CXX test/cpp_headers/queue.o 00:05:10.967 CXX test/cpp_headers/reduce.o 00:05:10.967 CXX test/cpp_headers/rpc.o 00:05:10.967 CXX test/cpp_headers/scheduler.o 00:05:10.967 CXX test/cpp_headers/scsi.o 00:05:10.967 CXX test/cpp_headers/scsi_spec.o 00:05:10.967 CXX test/cpp_headers/sock.o 00:05:10.967 CXX test/cpp_headers/stdinc.o 00:05:10.967 CXX test/cpp_headers/string.o 00:05:10.967 CXX test/cpp_headers/thread.o 00:05:10.967 CXX test/cpp_headers/trace.o 00:05:10.967 CXX test/cpp_headers/trace_parser.o 00:05:10.967 CXX test/cpp_headers/tree.o 00:05:10.967 CXX test/cpp_headers/ublk.o 00:05:10.967 CXX test/cpp_headers/util.o 00:05:10.967 CXX test/cpp_headers/uuid.o 00:05:10.967 CXX test/cpp_headers/version.o 00:05:11.225 CXX test/cpp_headers/vfio_user_pci.o 00:05:11.225 CXX test/cpp_headers/vfio_user_spec.o 00:05:11.225 CXX test/cpp_headers/vhost.o 00:05:11.225 CXX test/cpp_headers/vmd.o 00:05:11.225 CXX test/cpp_headers/xor.o 00:05:11.225 CXX test/cpp_headers/zipf.o 00:05:11.225 LINK cuse 00:05:15.418 LINK esnap 00:05:15.418 00:05:15.418 real 1m26.307s 00:05:15.418 user 7m10.814s 00:05:15.418 sys 1m57.944s 00:05:15.418 14:10:39 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:15.418 14:10:39 make -- common/autotest_common.sh@10 -- $ set +x 00:05:15.418 ************************************ 00:05:15.418 END TEST make 00:05:15.418 ************************************ 00:05:15.418 14:10:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:15.418 14:10:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:15.418 14:10:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:15.418 14:10:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.418 14:10:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:15.418 14:10:39 -- pm/common@44 -- $ pid=5283 00:05:15.418 14:10:39 -- pm/common@50 -- $ kill -TERM 5283 00:05:15.418 14:10:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.418 14:10:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:15.418 14:10:39 -- pm/common@44 -- $ pid=5285 00:05:15.418 14:10:39 -- pm/common@50 -- $ kill -TERM 5285 00:05:15.418 14:10:39 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:15.418 14:10:39 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:15.418 14:10:40 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.418 14:10:40 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.418 14:10:40 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 
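[Editor's note] The xtrace that follows is scripts/common.sh deciding whether the detected lcov (1.15) is older than 2: cmp_versions splits both version strings on ".", "-" and ":" into arrays and compares them field by field, treating missing fields as 0. A standalone condensation of the comparison visible in the trace (names follow the trace; the wrapper and the '<'-only simplification are mine):

  # Return 0 if dotted version $1 is strictly less than $3 (with $2 = '<').
  cmp_versions() {
      local IFS='.-:'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          # Unset fields evaluate as 0, so "1.15" vs "2" compares cleanly.
          (( ver1[v] > ver2[v] )) && return 1
          (( ver1[v] < ver2[v] )) && return 0
      done
      return 1   # equal versions are not strictly less
  }
  cmp_versions 1.15 '<' 2 && echo "lcov older than 2: use legacy --rc options"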
00:05:15.418 14:10:40 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.418 14:10:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.418 14:10:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.418 14:10:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.418 14:10:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.418 14:10:40 -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.418 14:10:40 -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.418 14:10:40 -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.418 14:10:40 -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.418 14:10:40 -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.418 14:10:40 -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.418 14:10:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.418 14:10:40 -- scripts/common.sh@344 -- # case "$op" in 00:05:15.418 14:10:40 -- scripts/common.sh@345 -- # : 1 00:05:15.418 14:10:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.418 14:10:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.418 14:10:40 -- scripts/common.sh@365 -- # decimal 1 00:05:15.418 14:10:40 -- scripts/common.sh@353 -- # local d=1 00:05:15.418 14:10:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.418 14:10:40 -- scripts/common.sh@355 -- # echo 1 00:05:15.418 14:10:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.418 14:10:40 -- scripts/common.sh@366 -- # decimal 2 00:05:15.418 14:10:40 -- scripts/common.sh@353 -- # local d=2 00:05:15.418 14:10:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.418 14:10:40 -- scripts/common.sh@355 -- # echo 2 00:05:15.418 14:10:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.418 14:10:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.418 14:10:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.418 14:10:40 -- scripts/common.sh@368 -- # return 0 00:05:15.418 14:10:40 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.418 14:10:40 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.418 --rc genhtml_branch_coverage=1 00:05:15.418 --rc genhtml_function_coverage=1 00:05:15.418 --rc genhtml_legend=1 00:05:15.418 --rc geninfo_all_blocks=1 00:05:15.418 --rc geninfo_unexecuted_blocks=1 00:05:15.418 00:05:15.418 ' 00:05:15.418 14:10:40 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.418 --rc genhtml_branch_coverage=1 00:05:15.418 --rc genhtml_function_coverage=1 00:05:15.418 --rc genhtml_legend=1 00:05:15.418 --rc geninfo_all_blocks=1 00:05:15.418 --rc geninfo_unexecuted_blocks=1 00:05:15.418 00:05:15.418 ' 00:05:15.418 14:10:40 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.418 --rc genhtml_branch_coverage=1 00:05:15.418 --rc genhtml_function_coverage=1 00:05:15.418 --rc genhtml_legend=1 00:05:15.418 --rc geninfo_all_blocks=1 00:05:15.418 --rc geninfo_unexecuted_blocks=1 00:05:15.418 00:05:15.418 ' 00:05:15.419 14:10:40 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.419 --rc genhtml_branch_coverage=1 00:05:15.419 --rc genhtml_function_coverage=1 00:05:15.419 --rc genhtml_legend=1 00:05:15.419 --rc geninfo_all_blocks=1 00:05:15.419 --rc 
geninfo_unexecuted_blocks=1 00:05:15.419 00:05:15.419 ' 00:05:15.419 14:10:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:15.419 14:10:40 -- nvmf/common.sh@7 -- # uname -s 00:05:15.419 14:10:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.419 14:10:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.419 14:10:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.419 14:10:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.419 14:10:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.419 14:10:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.419 14:10:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.419 14:10:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.419 14:10:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.419 14:10:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.419 14:10:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0170221-08c0-40d7-bc6c-09be5c3f45af 00:05:15.419 14:10:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0170221-08c0-40d7-bc6c-09be5c3f45af 00:05:15.419 14:10:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.419 14:10:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.419 14:10:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.419 14:10:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.419 14:10:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:15.419 14:10:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.419 14:10:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.419 14:10:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.419 14:10:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.419 14:10:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.419 14:10:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.419 14:10:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.419 14:10:40 -- paths/export.sh@5 -- # export PATH 00:05:15.419 14:10:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.419 14:10:40 -- nvmf/common.sh@51 -- # : 0 00:05:15.419 14:10:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.419 14:10:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.419 14:10:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.419 14:10:40 -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.419 14:10:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.419 14:10:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.419 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.419 14:10:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.419 14:10:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.419 14:10:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.419 14:10:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:15.678 14:10:40 -- spdk/autotest.sh@32 -- # uname -s 00:05:15.678 14:10:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:15.678 14:10:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:15.678 14:10:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:15.678 14:10:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:15.678 14:10:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:15.678 14:10:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:15.678 14:10:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:15.678 14:10:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:15.678 14:10:40 -- spdk/autotest.sh@48 -- # udevadm_pid=55998 00:05:15.678 14:10:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:15.678 14:10:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:15.678 14:10:40 -- pm/common@17 -- # local monitor 00:05:15.678 14:10:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.678 14:10:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.678 14:10:40 -- pm/common@21 -- # date +%s 00:05:15.678 14:10:40 -- pm/common@21 -- # date +%s 00:05:15.678 14:10:40 -- pm/common@25 -- # sleep 1 00:05:15.678 14:10:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733839840 00:05:15.678 14:10:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733839840 00:05:15.678 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733839840_collect-vmstat.pm.log 00:05:15.678 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733839840_collect-cpu-load.pm.log 00:05:16.616 14:10:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:16.616 14:10:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:16.616 14:10:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.616 14:10:41 -- common/autotest_common.sh@10 -- # set +x 00:05:16.616 14:10:41 -- spdk/autotest.sh@59 -- # create_test_list 00:05:16.616 14:10:41 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:16.616 14:10:41 -- common/autotest_common.sh@10 -- # set +x 00:05:16.616 14:10:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:16.616 14:10:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:16.616 14:10:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:16.616 14:10:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:16.616 14:10:41 -- spdk/autotest.sh@63 -- # cd 
/home/vagrant/spdk_repo/spdk 00:05:16.616 14:10:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:16.616 14:10:41 -- common/autotest_common.sh@1457 -- # uname 00:05:16.616 14:10:41 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:16.616 14:10:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:16.616 14:10:41 -- common/autotest_common.sh@1477 -- # uname 00:05:16.616 14:10:41 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:16.616 14:10:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:16.616 14:10:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:16.875 lcov: LCOV version 1.15 00:05:16.875 14:10:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:31.762 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:31.763 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:49.869 14:11:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:49.869 14:11:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.869 14:11:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.869 14:11:11 -- spdk/autotest.sh@78 -- # rm -f 00:05:49.869 14:11:11 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.869 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:49.869 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:49.869 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:49.869 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:49.869 14:11:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:49.869 14:11:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:49.869 14:11:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:49.869 14:11:13 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:49.869 14:11:13 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:49.869 14:11:13 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:49.869 14:11:13 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:49.869 14:11:13 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:49.869 14:11:13 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:49.869 14:11:13 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:49.869 14:11:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:49.869 14:11:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:49.869 14:11:13 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:49.869 14:11:13 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:49.869 
14:11:13 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:49.869 14:11:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:49.869 14:11:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:49.869 14:11:13 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:49.869 14:11:13 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:49.869 14:11:13 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:49.869 14:11:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:49.869 14:11:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:49.869 14:11:13 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:05:49.869 14:11:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:49.869 14:11:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:49.869 14:11:13 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:05:49.869 14:11:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:49.869 14:11:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:49.869 14:11:13 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:49.869 14:11:13 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:49.869 14:11:13 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:05:49.869 14:11:13 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:49.869 14:11:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:49.869 14:11:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:49.869 14:11:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:49.869 14:11:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:49.869 14:11:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:49.869 14:11:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:49.869 14:11:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:49.869 14:11:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:49.869 No valid GPT data, bailing 00:05:49.869 14:11:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:49.869 14:11:13 -- scripts/common.sh@394 -- # pt= 00:05:49.869 14:11:13 -- scripts/common.sh@395 -- # return 1 00:05:49.869 14:11:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:49.869 1+0 records in 00:05:49.869 1+0 records out 00:05:49.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018429 s, 56.9 MB/s 00:05:49.869 14:11:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:49.869 14:11:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:49.869 14:11:13 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:49.869 14:11:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:49.869 14:11:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:49.869 No valid GPT data, bailing 00:05:49.869 14:11:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:49.869 14:11:13 -- scripts/common.sh@394 -- # pt= 00:05:49.870 14:11:13 -- scripts/common.sh@395 -- # return 1 00:05:49.870 14:11:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:49.870 1+0 records in 00:05:49.870 1+0 records out 00:05:49.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443313 s, 237 MB/s 00:05:49.870 14:11:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:49.870 14:11:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:49.870 14:11:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:49.870 14:11:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:49.870 14:11:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:49.870 No valid GPT data, bailing 00:05:49.870 14:11:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:49.870 14:11:13 -- scripts/common.sh@394 -- # pt= 00:05:49.870 14:11:13 -- scripts/common.sh@395 -- # return 1 00:05:49.870 14:11:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:49.870 1+0 records in 00:05:49.870 1+0 records out 00:05:49.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00620116 s, 169 MB/s 00:05:49.870 14:11:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:49.870 14:11:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:49.870 14:11:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:49.870 14:11:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:49.870 14:11:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:49.870 No valid GPT data, bailing 00:05:49.870 14:11:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:49.870 14:11:13 -- scripts/common.sh@394 -- # pt= 00:05:49.870 14:11:13 -- scripts/common.sh@395 -- # return 1 00:05:49.870 14:11:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:49.870 1+0 records in 00:05:49.870 1+0 records out 00:05:49.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621863 s, 169 MB/s 00:05:49.870 14:11:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:49.870 14:11:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:49.870 14:11:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:49.870 14:11:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:49.870 14:11:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:49.870 No valid GPT data, bailing 00:05:49.870 14:11:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:49.870 14:11:13 -- scripts/common.sh@394 -- # pt= 00:05:49.870 14:11:13 -- scripts/common.sh@395 -- # return 1 00:05:49.870 14:11:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:49.870 1+0 records in 00:05:49.870 1+0 records out 00:05:49.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560916 s, 187 MB/s 00:05:49.870 14:11:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:49.870 14:11:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:49.870 14:11:13 
-- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:49.870 14:11:13 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:49.870 14:11:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:49.870 No valid GPT data, bailing 00:05:49.870 14:11:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:49.870 14:11:14 -- scripts/common.sh@394 -- # pt= 00:05:49.870 14:11:14 -- scripts/common.sh@395 -- # return 1 00:05:49.870 14:11:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:49.870 1+0 records in 00:05:49.870 1+0 records out 00:05:49.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00648687 s, 162 MB/s 00:05:49.870 14:11:14 -- spdk/autotest.sh@105 -- # sync 00:05:49.870 14:11:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:49.870 14:11:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:49.870 14:11:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:52.413 14:11:16 -- spdk/autotest.sh@111 -- # uname -s 00:05:52.413 14:11:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:52.413 14:11:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:52.413 14:11:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:52.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.550 Hugepages 00:05:53.550 node hugesize free / total 00:05:53.550 node0 1048576kB 0 / 0 00:05:53.550 node0 2048kB 0 / 0 00:05:53.550 00:05:53.550 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:53.810 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:53.810 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:54.073 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:54.073 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:54.073 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:54.332 14:11:18 -- spdk/autotest.sh@117 -- # uname -s 00:05:54.332 14:11:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:54.332 14:11:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:54.332 14:11:18 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:54.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.853 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:55.853 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:55.853 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:55.853 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:55.853 14:11:20 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:57.234 14:11:21 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:57.234 14:11:21 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:57.234 14:11:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:57.234 14:11:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:57.234 14:11:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:57.234 14:11:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:57.234 14:11:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:57.234 14:11:21 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:57.234 14:11:21 -- common/autotest_common.sh@1499 -- # 
jq -r '.config[].params.traddr' 00:05:57.234 14:11:21 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:57.234 14:11:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:57.234 14:11:21 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:57.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.063 Waiting for block devices as requested 00:05:58.063 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.063 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.322 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.322 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:03.595 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:03.595 14:11:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:03.595 14:11:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:03.595 14:11:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:03.595 14:11:28 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:03.595 14:11:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:03.595 14:11:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:03.595 14:11:28 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:03.595 14:11:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:03.595 14:11:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:03.595 14:11:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:03.595 14:11:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:03.595 14:11:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:03.595 14:11:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:03.595 14:11:28 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:03.595 14:11:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:03.595 14:11:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:03.595 14:11:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:03.595 14:11:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:03.595 14:11:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:03.595 14:11:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:03.595 14:11:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:03.595 14:11:28 -- common/autotest_common.sh@1543 -- # continue 00:06:03.595 14:11:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:03.595 14:11:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:03.595 14:11:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:03.595 14:11:28 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:03.595 14:11:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:03.595 14:11:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:03.595 14:11:28 -- common/autotest_common.sh@1492 
-- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:03.595 14:11:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:03.595 14:11:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:03.595 14:11:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:03.595 14:11:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:03.595 14:11:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:03.595 14:11:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:03.595 14:11:28 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:03.595 14:11:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:03.595 14:11:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:03.595 14:11:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:03.595 14:11:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:03.595 14:11:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:03.595 14:11:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:03.595 14:11:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:03.595 14:11:28 -- common/autotest_common.sh@1543 -- # continue 00:06:03.595 14:11:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:03.595 14:11:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:03.595 14:11:28 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:06:03.595 14:11:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:03.596 14:11:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:03.596 14:11:28 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:06:03.596 14:11:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:03.596 14:11:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:03.596 14:11:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:03.596 14:11:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:03.596 14:11:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:03.596 14:11:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:03.596 14:11:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:03.596 14:11:28 -- common/autotest_common.sh@1543 -- # continue 00:06:03.596 14:11:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:03.596 14:11:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:03.596 14:11:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:03.596 14:11:28 -- 
common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:06:03.596 14:11:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:03.596 14:11:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:03.596 14:11:28 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:03.596 14:11:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:06:03.596 14:11:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:06:03.596 14:11:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:06:03.596 14:11:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:06:03.596 14:11:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:03.596 14:11:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:03.596 14:11:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:03.596 14:11:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:03.596 14:11:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:06:03.596 14:11:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:03.596 14:11:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:03.596 14:11:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:03.596 14:11:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:03.596 14:11:28 -- common/autotest_common.sh@1543 -- # continue 00:06:03.596 14:11:28 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:03.596 14:11:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.596 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:06:03.855 14:11:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:03.855 14:11:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.855 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:06:03.855 14:11:28 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:04.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.364 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.364 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.364 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.364 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.364 14:11:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:05.364 14:11:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.364 14:11:30 -- common/autotest_common.sh@10 -- # set +x 00:06:05.624 14:11:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:05.624 14:11:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:05.624 14:11:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:05.624 14:11:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:05.624 14:11:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:05.624 14:11:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:05.624 14:11:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:05.624 14:11:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:05.624 14:11:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:05.624 14:11:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:05.624 14:11:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq 
-r '.config[].params.traddr')) 00:06:05.624 14:11:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:05.624 14:11:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:05.624 14:11:30 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:05.624 14:11:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:05.624 14:11:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:05.624 14:11:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:05.624 14:11:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:05.624 14:11:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.624 14:11:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:05.624 14:11:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:05.624 14:11:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:05.624 14:11:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.624 14:11:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:05.624 14:11:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:05.624 14:11:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:05.624 14:11:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.624 14:11:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:05.624 14:11:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:05.624 14:11:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:05.624 14:11:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.624 14:11:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:05.624 14:11:30 -- common/autotest_common.sh@1572 -- # return 0 00:06:05.624 14:11:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:05.624 14:11:30 -- common/autotest_common.sh@1580 -- # return 0 00:06:05.624 14:11:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:05.624 14:11:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:05.624 14:11:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:05.624 14:11:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:05.624 14:11:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:05.624 14:11:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.624 14:11:30 -- common/autotest_common.sh@10 -- # set +x 00:06:05.624 14:11:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:05.624 14:11:30 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:05.624 14:11:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.624 14:11:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.624 14:11:30 -- common/autotest_common.sh@10 -- # set +x 00:06:05.624 ************************************ 00:06:05.624 START TEST env 00:06:05.624 ************************************ 00:06:05.624 14:11:30 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:05.884 * Looking for test storage... 
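[Editor's note] The pre-cleanup trace above gates per-controller cleanup on two fields of `nvme id-ctrl`: OACS (Optional Admin Command Support, 0x12a on these drives) is masked with 0x8 for the namespace-management bit, and `unvmcap` (unallocated NVM capacity) must be 0. A standalone sketch of that check — the grep/cut parsing and the 0x8 mask follow the trace, the loop wrapper is mine:

  for ctrl in /dev/nvme[0-9]; do
      oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
      unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
      # Bit 3 of OACS = namespace management supported (0x12a & 0x8 = 8)
      if (( (oacs & 0x8) != 0 )) && (( unvmcap == 0 )); then
          echo "$ctrl: ns-management supported, no unallocated capacity"
      fi
  done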
00:06:05.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.884 14:11:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.884 14:11:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.884 14:11:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.884 14:11:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.884 14:11:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.884 14:11:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.884 14:11:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.884 14:11:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.884 14:11:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.884 14:11:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.884 14:11:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.884 14:11:30 env -- scripts/common.sh@344 -- # case "$op" in 00:06:05.884 14:11:30 env -- scripts/common.sh@345 -- # : 1 00:06:05.884 14:11:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.884 14:11:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.884 14:11:30 env -- scripts/common.sh@365 -- # decimal 1 00:06:05.884 14:11:30 env -- scripts/common.sh@353 -- # local d=1 00:06:05.884 14:11:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.884 14:11:30 env -- scripts/common.sh@355 -- # echo 1 00:06:05.884 14:11:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.884 14:11:30 env -- scripts/common.sh@366 -- # decimal 2 00:06:05.884 14:11:30 env -- scripts/common.sh@353 -- # local d=2 00:06:05.884 14:11:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.884 14:11:30 env -- scripts/common.sh@355 -- # echo 2 00:06:05.884 14:11:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.884 14:11:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.884 14:11:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.884 14:11:30 env -- scripts/common.sh@368 -- # return 0 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.884 --rc genhtml_branch_coverage=1 00:06:05.884 --rc genhtml_function_coverage=1 00:06:05.884 --rc genhtml_legend=1 00:06:05.884 --rc geninfo_all_blocks=1 00:06:05.884 --rc geninfo_unexecuted_blocks=1 00:06:05.884 00:06:05.884 ' 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.884 --rc genhtml_branch_coverage=1 00:06:05.884 --rc genhtml_function_coverage=1 00:06:05.884 --rc genhtml_legend=1 00:06:05.884 --rc geninfo_all_blocks=1 00:06:05.884 --rc geninfo_unexecuted_blocks=1 00:06:05.884 00:06:05.884 ' 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.884 --rc genhtml_branch_coverage=1 00:06:05.884 --rc genhtml_function_coverage=1 00:06:05.884 --rc 
genhtml_legend=1 00:06:05.884 --rc geninfo_all_blocks=1 00:06:05.884 --rc geninfo_unexecuted_blocks=1 00:06:05.884 00:06:05.884 ' 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.884 --rc genhtml_branch_coverage=1 00:06:05.884 --rc genhtml_function_coverage=1 00:06:05.884 --rc genhtml_legend=1 00:06:05.884 --rc geninfo_all_blocks=1 00:06:05.884 --rc geninfo_unexecuted_blocks=1 00:06:05.884 00:06:05.884 ' 00:06:05.884 14:11:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.884 14:11:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.884 14:11:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.884 ************************************ 00:06:05.884 START TEST env_memory 00:06:05.884 ************************************ 00:06:05.884 14:11:30 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:05.884 00:06:05.884 00:06:05.884 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.884 http://cunit.sourceforge.net/ 00:06:05.884 00:06:05.884 00:06:05.884 Suite: memory 00:06:06.144 Test: alloc and free memory map ...[2024-12-10 14:11:30.735028] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:06.144 passed 00:06:06.144 Test: mem map translation ...[2024-12-10 14:11:30.779943] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:06.144 [2024-12-10 14:11:30.780097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:06.144 [2024-12-10 14:11:30.780336] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:06.144 [2024-12-10 14:11:30.780404] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:06.144 passed 00:06:06.144 Test: mem map registration ...[2024-12-10 14:11:30.848545] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:06.144 [2024-12-10 14:11:30.848697] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:06.144 passed 00:06:06.144 Test: mem map adjacent registrations ...passed 00:06:06.144 00:06:06.144 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.144 suites 1 1 n/a 0 0 00:06:06.144 tests 4 4 4 0 0 00:06:06.144 asserts 152 152 152 0 n/a 00:06:06.144 00:06:06.144 Elapsed time = 0.242 seconds 00:06:06.144 00:06:06.144 real 0m0.304s 00:06:06.144 user 0m0.260s 00:06:06.144 sys 0m0.029s 00:06:06.144 14:11:30 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.144 ************************************ 00:06:06.144 END TEST env_memory 00:06:06.144 ************************************ 00:06:06.144 14:11:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:06.404 14:11:31 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:06.404 14:11:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.404 14:11:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.404 14:11:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.404 ************************************ 00:06:06.404 START TEST env_vtophys 00:06:06.404 ************************************ 00:06:06.404 14:11:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:06.404 EAL: lib.eal log level changed from notice to debug 00:06:06.404 EAL: Detected lcore 0 as core 0 on socket 0 00:06:06.404 EAL: Detected lcore 1 as core 0 on socket 0 00:06:06.404 EAL: Detected lcore 2 as core 0 on socket 0 00:06:06.404 EAL: Detected lcore 3 as core 0 on socket 0 00:06:06.404 EAL: Detected lcore 4 as core 0 on socket 0 00:06:06.404 EAL: Detected lcore 5 as core 0 on socket 0 00:06:06.404 EAL: Detected lcore 6 as core 0 on socket 0 00:06:06.404 EAL: Detected lcore 7 as core 0 on socket 0 00:06:06.404 EAL: Detected lcore 8 as core 0 on socket 0 00:06:06.404 EAL: Detected lcore 9 as core 0 on socket 0 00:06:06.404 EAL: Maximum logical cores by configuration: 128 00:06:06.404 EAL: Detected CPU lcores: 10 00:06:06.404 EAL: Detected NUMA nodes: 1 00:06:06.404 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:06.404 EAL: Detected shared linkage of DPDK 00:06:06.404 EAL: No shared files mode enabled, IPC will be disabled 00:06:06.404 EAL: Selected IOVA mode 'PA' 00:06:06.404 EAL: Probing VFIO support... 00:06:06.404 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:06.404 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:06.404 EAL: Ask a virtual area of 0x2e000 bytes 00:06:06.404 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:06.404 EAL: Setting up physically contiguous memory... 
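Note on the VFIO fallback above: EAL selects IOVA mode 'PA' because this guest exposes no vfio kernel module, so the tests run against uio-style device access. A rough sketch of what EAL is probing for (standard Linux commands, assuming a host with an enabled IOMMU; not part of this run's harness):

    # EAL looks for /sys/module/vfio; absent here, hence "VFIO modules not loaded"
    lsmod | grep -E '^vfio'
    # on IOMMU-capable hardware, loading vfio-pci would typically let EAL
    # choose IOVA mode 'VA' instead of falling back to 'PA'
    sudo modprobe vfio-pci
    ls /sys/module/vfio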
00:06:06.404 EAL: Setting maximum number of open files to 524288 00:06:06.404 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:06.404 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:06.404 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.404 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:06.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.404 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.404 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:06.404 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:06.404 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.404 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:06.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.404 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.404 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:06.404 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:06.404 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.404 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:06.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.404 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.404 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:06.404 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:06.404 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.404 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:06.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.404 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.404 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:06.404 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:06.404 EAL: Hugepages will be freed exactly as allocated. 00:06:06.404 EAL: No shared files mode enabled, IPC is disabled 00:06:06.404 EAL: No shared files mode enabled, IPC is disabled 00:06:06.663 EAL: TSC frequency is ~2490000 KHz 00:06:06.663 EAL: Main lcore 0 is ready (tid=7f466cb3aa40;cpuset=[0]) 00:06:06.663 EAL: Trying to obtain current memory policy. 00:06:06.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.663 EAL: Restoring previous memory policy: 0 00:06:06.663 EAL: request: mp_malloc_sync 00:06:06.663 EAL: No shared files mode enabled, IPC is disabled 00:06:06.663 EAL: Heap on socket 0 was expanded by 2MB 00:06:06.663 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:06.663 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:06.663 EAL: Mem event callback 'spdk:(nil)' registered 00:06:06.663 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:06.663 00:06:06.663 00:06:06.663 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.663 http://cunit.sourceforge.net/ 00:06:06.663 00:06:06.663 00:06:06.663 Suite: components_suite 00:06:06.936 Test: vtophys_malloc_test ...passed 00:06:06.936 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
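Note on the heap messages that follow: each "Heap on socket 0 was expanded/shrunk by N MB" pair is DPDK's dynamic heap reacting to vtophys_spdk_malloc_test allocating and then freeing progressively larger buffers (4MB up to 1026MB). The hugepage backing can be watched from outside the test with standard procfs (a convenience sketch, not part of the harness):

    # one-shot view of hugepage accounting while the suite runs
    grep -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo
    # HugePages_Free drops on each "expanded by" event and recovers on "shrunk by",
    # consistent with "Hugepages will be freed exactly as allocated" above
    watch -n 1 'grep HugePages_Free /proc/meminfo'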
00:06:06.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.936 EAL: Restoring previous memory policy: 4 00:06:06.936 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.936 EAL: request: mp_malloc_sync 00:06:06.936 EAL: No shared files mode enabled, IPC is disabled 00:06:06.936 EAL: Heap on socket 0 was expanded by 4MB 00:06:06.936 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.936 EAL: request: mp_malloc_sync 00:06:06.936 EAL: No shared files mode enabled, IPC is disabled 00:06:06.936 EAL: Heap on socket 0 was shrunk by 4MB 00:06:06.936 EAL: Trying to obtain current memory policy. 00:06:06.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.936 EAL: Restoring previous memory policy: 4 00:06:06.936 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.936 EAL: request: mp_malloc_sync 00:06:06.936 EAL: No shared files mode enabled, IPC is disabled 00:06:06.936 EAL: Heap on socket 0 was expanded by 6MB 00:06:06.936 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.936 EAL: request: mp_malloc_sync 00:06:06.936 EAL: No shared files mode enabled, IPC is disabled 00:06:06.936 EAL: Heap on socket 0 was shrunk by 6MB 00:06:06.936 EAL: Trying to obtain current memory policy. 00:06:06.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.936 EAL: Restoring previous memory policy: 4 00:06:06.936 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.936 EAL: request: mp_malloc_sync 00:06:06.936 EAL: No shared files mode enabled, IPC is disabled 00:06:06.936 EAL: Heap on socket 0 was expanded by 10MB 00:06:07.236 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.236 EAL: request: mp_malloc_sync 00:06:07.236 EAL: No shared files mode enabled, IPC is disabled 00:06:07.236 EAL: Heap on socket 0 was shrunk by 10MB 00:06:07.236 EAL: Trying to obtain current memory policy. 00:06:07.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.236 EAL: Restoring previous memory policy: 4 00:06:07.236 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.236 EAL: request: mp_malloc_sync 00:06:07.236 EAL: No shared files mode enabled, IPC is disabled 00:06:07.236 EAL: Heap on socket 0 was expanded by 18MB 00:06:07.236 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.236 EAL: request: mp_malloc_sync 00:06:07.236 EAL: No shared files mode enabled, IPC is disabled 00:06:07.236 EAL: Heap on socket 0 was shrunk by 18MB 00:06:07.236 EAL: Trying to obtain current memory policy. 00:06:07.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.236 EAL: Restoring previous memory policy: 4 00:06:07.236 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.236 EAL: request: mp_malloc_sync 00:06:07.236 EAL: No shared files mode enabled, IPC is disabled 00:06:07.236 EAL: Heap on socket 0 was expanded by 34MB 00:06:07.236 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.236 EAL: request: mp_malloc_sync 00:06:07.236 EAL: No shared files mode enabled, IPC is disabled 00:06:07.236 EAL: Heap on socket 0 was shrunk by 34MB 00:06:07.236 EAL: Trying to obtain current memory policy. 
00:06:07.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.236 EAL: Restoring previous memory policy: 4 00:06:07.236 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.236 EAL: request: mp_malloc_sync 00:06:07.236 EAL: No shared files mode enabled, IPC is disabled 00:06:07.236 EAL: Heap on socket 0 was expanded by 66MB 00:06:07.495 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.495 EAL: request: mp_malloc_sync 00:06:07.495 EAL: No shared files mode enabled, IPC is disabled 00:06:07.495 EAL: Heap on socket 0 was shrunk by 66MB 00:06:07.495 EAL: Trying to obtain current memory policy. 00:06:07.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.495 EAL: Restoring previous memory policy: 4 00:06:07.495 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.495 EAL: request: mp_malloc_sync 00:06:07.495 EAL: No shared files mode enabled, IPC is disabled 00:06:07.495 EAL: Heap on socket 0 was expanded by 130MB 00:06:07.755 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.755 EAL: request: mp_malloc_sync 00:06:07.755 EAL: No shared files mode enabled, IPC is disabled 00:06:07.755 EAL: Heap on socket 0 was shrunk by 130MB 00:06:08.015 EAL: Trying to obtain current memory policy. 00:06:08.015 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.015 EAL: Restoring previous memory policy: 4 00:06:08.015 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.015 EAL: request: mp_malloc_sync 00:06:08.015 EAL: No shared files mode enabled, IPC is disabled 00:06:08.015 EAL: Heap on socket 0 was expanded by 258MB 00:06:08.583 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.583 EAL: request: mp_malloc_sync 00:06:08.583 EAL: No shared files mode enabled, IPC is disabled 00:06:08.583 EAL: Heap on socket 0 was shrunk by 258MB 00:06:09.151 EAL: Trying to obtain current memory policy. 00:06:09.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.151 EAL: Restoring previous memory policy: 4 00:06:09.151 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.151 EAL: request: mp_malloc_sync 00:06:09.151 EAL: No shared files mode enabled, IPC is disabled 00:06:09.151 EAL: Heap on socket 0 was expanded by 514MB 00:06:10.090 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.090 EAL: request: mp_malloc_sync 00:06:10.090 EAL: No shared files mode enabled, IPC is disabled 00:06:10.090 EAL: Heap on socket 0 was shrunk by 514MB 00:06:11.028 EAL: Trying to obtain current memory policy. 
00:06:11.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.028 EAL: Restoring previous memory policy: 4 00:06:11.028 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.028 EAL: request: mp_malloc_sync 00:06:11.028 EAL: No shared files mode enabled, IPC is disabled 00:06:11.028 EAL: Heap on socket 0 was expanded by 1026MB 00:06:12.934 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.934 EAL: request: mp_malloc_sync 00:06:12.934 EAL: No shared files mode enabled, IPC is disabled 00:06:12.934 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:14.839 passed 00:06:14.839 00:06:14.839 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.839 suites 1 1 n/a 0 0 00:06:14.839 tests 2 2 2 0 0 00:06:14.839 asserts 5670 5670 5670 0 n/a 00:06:14.839 00:06:14.839 Elapsed time = 8.019 seconds 00:06:14.839 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.839 EAL: request: mp_malloc_sync 00:06:14.839 EAL: No shared files mode enabled, IPC is disabled 00:06:14.839 EAL: Heap on socket 0 was shrunk by 2MB 00:06:14.839 EAL: No shared files mode enabled, IPC is disabled 00:06:14.839 EAL: No shared files mode enabled, IPC is disabled 00:06:14.839 EAL: No shared files mode enabled, IPC is disabled 00:06:14.839 00:06:14.839 real 0m8.366s 00:06:14.839 user 0m7.278s 00:06:14.839 sys 0m0.924s 00:06:14.839 14:11:39 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.839 14:11:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:14.839 ************************************ 00:06:14.839 END TEST env_vtophys 00:06:14.839 ************************************ 00:06:14.839 14:11:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:14.839 14:11:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.839 14:11:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.839 14:11:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.839 ************************************ 00:06:14.839 START TEST env_pci 00:06:14.839 ************************************ 00:06:14.839 14:11:39 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:14.839 00:06:14.839 00:06:14.839 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.839 http://cunit.sourceforge.net/ 00:06:14.839 00:06:14.839 00:06:14.839 Suite: pci 00:06:14.839 Test: pci_hook ...[2024-12-10 14:11:39.532959] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58839 has claimed it 00:06:14.839 passed 00:06:14.839 00:06:14.839 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.839 suites 1 1 n/a 0 0 00:06:14.839 tests 1 1 1 0 0 00:06:14.839 asserts 25 25 25 0 n/a 00:06:14.839 00:06:14.839 Elapsed time = 0.007 seconds 00:06:14.839 EAL: Cannot find device (10000:00:01.0) 00:06:14.839 EAL: Failed to attach device on primary process 00:06:14.839 00:06:14.839 real 0m0.112s 00:06:14.839 user 0m0.046s 00:06:14.839 sys 0m0.065s 00:06:14.839 ************************************ 00:06:14.839 END TEST env_pci 00:06:14.839 ************************************ 00:06:14.839 14:11:39 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.839 14:11:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:14.839 14:11:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:14.839 14:11:39 env -- env/env.sh@15 -- # uname 00:06:14.839 14:11:39 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:14.839 14:11:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:14.839 14:11:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:14.839 14:11:39 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:14.839 14:11:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.839 14:11:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:15.099 ************************************ 00:06:15.099 START TEST env_dpdk_post_init 00:06:15.099 ************************************ 00:06:15.099 14:11:39 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:15.099 EAL: Detected CPU lcores: 10 00:06:15.099 EAL: Detected NUMA nodes: 1 00:06:15.099 EAL: Detected shared linkage of DPDK 00:06:15.099 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:15.099 EAL: Selected IOVA mode 'PA' 00:06:15.099 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:15.357 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:15.357 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:15.357 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:15.357 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:15.357 Starting DPDK initialization... 00:06:15.357 Starting SPDK post initialization... 00:06:15.357 SPDK NVMe probe 00:06:15.357 Attaching to 0000:00:10.0 00:06:15.357 Attaching to 0000:00:11.0 00:06:15.357 Attaching to 0000:00:12.0 00:06:15.357 Attaching to 0000:00:13.0 00:06:15.357 Attached to 0000:00:10.0 00:06:15.357 Attached to 0000:00:11.0 00:06:15.357 Attached to 0000:00:13.0 00:06:15.357 Attached to 0000:00:12.0 00:06:15.357 Cleaning up... 
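Note: the '-c 0x1 --base-virtaddr=0x200000000000' arguments this binary received are assembled by test/env/env.sh immediately beforehand (the env.sh@14, @15 and @22 trace lines above). Condensed, with variable names approximated:

    argv='-c 0x1 '                                    # single-core lcore mask for DPDK
    if [ "$(uname)" = Linux ]; then
        # fixed virtual base so DPDK maps memory at a predictable address
        argv+='--base-virtaddr=0x200000000000'
    fi
    run_test env_dpdk_post_init "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv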
00:06:15.357 ************************************ 00:06:15.357 END TEST env_dpdk_post_init 00:06:15.357 ************************************ 00:06:15.357 00:06:15.357 real 0m0.317s 00:06:15.357 user 0m0.094s 00:06:15.357 sys 0m0.125s 00:06:15.357 14:11:40 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.357 14:11:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:15.357 14:11:40 env -- env/env.sh@26 -- # uname 00:06:15.357 14:11:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:15.357 14:11:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:15.357 14:11:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.357 14:11:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.357 14:11:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:15.357 ************************************ 00:06:15.357 START TEST env_mem_callbacks 00:06:15.357 ************************************ 00:06:15.357 14:11:40 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:15.357 EAL: Detected CPU lcores: 10 00:06:15.357 EAL: Detected NUMA nodes: 1 00:06:15.357 EAL: Detected shared linkage of DPDK 00:06:15.357 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:15.357 EAL: Selected IOVA mode 'PA' 00:06:15.616 00:06:15.616 00:06:15.616 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.616 http://cunit.sourceforge.net/ 00:06:15.616 00:06:15.616 00:06:15.616 Suite: memory 00:06:15.616 Test: test ... 00:06:15.616 register 0x200000200000 2097152 00:06:15.616 malloc 3145728 00:06:15.616 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:15.616 register 0x200000400000 4194304 00:06:15.616 buf 0x2000004fffc0 len 3145728 PASSED 00:06:15.616 malloc 64 00:06:15.616 buf 0x2000004ffec0 len 64 PASSED 00:06:15.616 malloc 4194304 00:06:15.616 register 0x200000800000 6291456 00:06:15.616 buf 0x2000009fffc0 len 4194304 PASSED 00:06:15.616 free 0x2000004fffc0 3145728 00:06:15.616 free 0x2000004ffec0 64 00:06:15.616 unregister 0x200000400000 4194304 PASSED 00:06:15.616 free 0x2000009fffc0 4194304 00:06:15.616 unregister 0x200000800000 6291456 PASSED 00:06:15.616 malloc 8388608 00:06:15.616 register 0x200000400000 10485760 00:06:15.616 buf 0x2000005fffc0 len 8388608 PASSED 00:06:15.616 free 0x2000005fffc0 8388608 00:06:15.616 unregister 0x200000400000 10485760 PASSED 00:06:15.616 passed 00:06:15.616 00:06:15.616 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.616 suites 1 1 n/a 0 0 00:06:15.616 tests 1 1 1 0 0 00:06:15.616 asserts 15 15 15 0 n/a 00:06:15.616 00:06:15.616 Elapsed time = 0.080 seconds 00:06:15.616 00:06:15.616 real 0m0.292s 00:06:15.616 user 0m0.107s 00:06:15.616 sys 0m0.080s 00:06:15.616 14:11:40 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.616 ************************************ 00:06:15.616 END TEST env_mem_callbacks 00:06:15.616 ************************************ 00:06:15.616 14:11:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:15.616 ************************************ 00:06:15.616 END TEST env 00:06:15.616 ************************************ 00:06:15.616 00:06:15.616 real 0m10.047s 00:06:15.616 user 0m8.055s 00:06:15.616 sys 0m1.604s 00:06:15.616 14:11:40 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.616 14:11:40 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.892 14:11:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:15.892 14:11:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.892 14:11:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.892 14:11:40 -- common/autotest_common.sh@10 -- # set +x 00:06:15.892 ************************************ 00:06:15.892 START TEST rpc 00:06:15.892 ************************************ 00:06:15.892 14:11:40 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:15.892 * Looking for test storage... 00:06:15.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:15.892 14:11:40 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.892 14:11:40 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.892 14:11:40 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.159 14:11:40 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.159 14:11:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.159 14:11:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.159 14:11:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.159 14:11:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.159 14:11:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.159 14:11:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.159 14:11:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.159 14:11:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.159 14:11:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.159 14:11:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.159 14:11:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.159 14:11:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:16.159 14:11:40 rpc -- scripts/common.sh@345 -- # : 1 00:06:16.159 14:11:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.159 14:11:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.159 14:11:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:16.159 14:11:40 rpc -- scripts/common.sh@353 -- # local d=1 00:06:16.159 14:11:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.159 14:11:40 rpc -- scripts/common.sh@355 -- # echo 1 00:06:16.159 14:11:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.159 14:11:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:16.159 14:11:40 rpc -- scripts/common.sh@353 -- # local d=2 00:06:16.159 14:11:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.159 14:11:40 rpc -- scripts/common.sh@355 -- # echo 2 00:06:16.159 14:11:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.159 14:11:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.159 14:11:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.159 14:11:40 rpc -- scripts/common.sh@368 -- # return 0 00:06:16.159 14:11:40 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.159 14:11:40 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.159 --rc genhtml_branch_coverage=1 00:06:16.159 --rc genhtml_function_coverage=1 00:06:16.159 --rc genhtml_legend=1 00:06:16.159 --rc geninfo_all_blocks=1 00:06:16.159 --rc geninfo_unexecuted_blocks=1 00:06:16.159 00:06:16.159 ' 00:06:16.159 14:11:40 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.159 --rc genhtml_branch_coverage=1 00:06:16.159 --rc genhtml_function_coverage=1 00:06:16.159 --rc genhtml_legend=1 00:06:16.159 --rc geninfo_all_blocks=1 00:06:16.159 --rc geninfo_unexecuted_blocks=1 00:06:16.159 00:06:16.159 ' 00:06:16.159 14:11:40 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.159 --rc genhtml_branch_coverage=1 00:06:16.159 --rc genhtml_function_coverage=1 00:06:16.159 --rc genhtml_legend=1 00:06:16.159 --rc geninfo_all_blocks=1 00:06:16.159 --rc geninfo_unexecuted_blocks=1 00:06:16.159 00:06:16.159 ' 00:06:16.159 14:11:40 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.160 --rc genhtml_branch_coverage=1 00:06:16.160 --rc genhtml_function_coverage=1 00:06:16.160 --rc genhtml_legend=1 00:06:16.160 --rc geninfo_all_blocks=1 00:06:16.160 --rc geninfo_unexecuted_blocks=1 00:06:16.160 00:06:16.160 ' 00:06:16.160 14:11:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58966 00:06:16.160 14:11:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.160 14:11:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58966 00:06:16.160 14:11:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:16.160 14:11:40 rpc -- common/autotest_common.sh@835 -- # '[' -z 58966 ']' 00:06:16.160 14:11:40 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.160 14:11:40 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.160 14:11:40 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
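Note: the cmp_versions trace above repeats once per test suite (each script sources autotest_common.sh) and gates the lcov 1.x-era '--rc' coverage flags on lcov being older than 2. After splitting on '.', '-' and ':', it is a plain field-wise numeric comparison; a condensed sketch of the same idea (scripts/common.sh remains the authoritative version):

    lt() {  # usage: lt VER1 VER2 -> succeeds when VER1 < VER2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "keep lcov 1.x branch/function flags"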
00:06:16.160 14:11:40 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.160 14:11:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.160 [2024-12-10 14:11:40.905425] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:06:16.160 [2024-12-10 14:11:40.905964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:06:16.419 [2024-12-10 14:11:41.093194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.419 [2024-12-10 14:11:41.205492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:16.419 [2024-12-10 14:11:41.205556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58966' to capture a snapshot of events at runtime. 00:06:16.419 [2024-12-10 14:11:41.205570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.419 [2024-12-10 14:11:41.205584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.419 [2024-12-10 14:11:41.205594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58966 for offline analysis/debug. 00:06:16.419 [2024-12-10 14:11:41.206810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.355 14:11:42 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.355 14:11:42 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.355 14:11:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:17.355 14:11:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:17.355 14:11:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:17.355 14:11:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:17.355 14:11:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.355 14:11:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.355 14:11:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.355 ************************************ 00:06:17.355 START TEST rpc_integrity 00:06:17.355 ************************************ 00:06:17.355 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:17.356 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:17.356 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.356 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.356 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.356 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:17.356 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:17.356 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:17.356 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:17.356 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.356 14:11:42 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.356 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.356 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:17.356 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:17.356 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.356 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.356 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.356 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:17.356 { 00:06:17.356 "name": "Malloc0", 00:06:17.356 "aliases": [ 00:06:17.356 "e13c5ebe-ef24-49f5-92a3-1138d0d4bde4" 00:06:17.356 ], 00:06:17.356 "product_name": "Malloc disk", 00:06:17.356 "block_size": 512, 00:06:17.356 "num_blocks": 16384, 00:06:17.356 "uuid": "e13c5ebe-ef24-49f5-92a3-1138d0d4bde4", 00:06:17.356 "assigned_rate_limits": { 00:06:17.356 "rw_ios_per_sec": 0, 00:06:17.356 "rw_mbytes_per_sec": 0, 00:06:17.356 "r_mbytes_per_sec": 0, 00:06:17.356 "w_mbytes_per_sec": 0 00:06:17.356 }, 00:06:17.356 "claimed": false, 00:06:17.356 "zoned": false, 00:06:17.356 "supported_io_types": { 00:06:17.356 "read": true, 00:06:17.356 "write": true, 00:06:17.356 "unmap": true, 00:06:17.356 "flush": true, 00:06:17.356 "reset": true, 00:06:17.356 "nvme_admin": false, 00:06:17.356 "nvme_io": false, 00:06:17.356 "nvme_io_md": false, 00:06:17.356 "write_zeroes": true, 00:06:17.356 "zcopy": true, 00:06:17.356 "get_zone_info": false, 00:06:17.356 "zone_management": false, 00:06:17.356 "zone_append": false, 00:06:17.356 "compare": false, 00:06:17.356 "compare_and_write": false, 00:06:17.356 "abort": true, 00:06:17.356 "seek_hole": false, 00:06:17.356 "seek_data": false, 00:06:17.356 "copy": true, 00:06:17.356 "nvme_iov_md": false 00:06:17.356 }, 00:06:17.356 "memory_domains": [ 00:06:17.356 { 00:06:17.356 "dma_device_id": "system", 00:06:17.356 "dma_device_type": 1 00:06:17.356 }, 00:06:17.356 { 00:06:17.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.356 "dma_device_type": 2 00:06:17.356 } 00:06:17.356 ], 00:06:17.356 "driver_specific": {} 00:06:17.356 } 00:06:17.356 ]' 00:06:17.356 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:17.615 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:17.615 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:17.615 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.615 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.615 [2024-12-10 14:11:42.232378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:17.615 [2024-12-10 14:11:42.233302] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.615 [2024-12-10 14:11:42.233342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:17.615 [2024-12-10 14:11:42.233358] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.615 [2024-12-10 14:11:42.235846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.615 [2024-12-10 14:11:42.235896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:17.615 Passthru0 00:06:17.615 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.615 
14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:17.615 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.615 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.615 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.615 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:17.615 { 00:06:17.615 "name": "Malloc0", 00:06:17.615 "aliases": [ 00:06:17.615 "e13c5ebe-ef24-49f5-92a3-1138d0d4bde4" 00:06:17.615 ], 00:06:17.615 "product_name": "Malloc disk", 00:06:17.615 "block_size": 512, 00:06:17.615 "num_blocks": 16384, 00:06:17.615 "uuid": "e13c5ebe-ef24-49f5-92a3-1138d0d4bde4", 00:06:17.615 "assigned_rate_limits": { 00:06:17.615 "rw_ios_per_sec": 0, 00:06:17.615 "rw_mbytes_per_sec": 0, 00:06:17.615 "r_mbytes_per_sec": 0, 00:06:17.615 "w_mbytes_per_sec": 0 00:06:17.615 }, 00:06:17.615 "claimed": true, 00:06:17.615 "claim_type": "exclusive_write", 00:06:17.615 "zoned": false, 00:06:17.615 "supported_io_types": { 00:06:17.615 "read": true, 00:06:17.615 "write": true, 00:06:17.615 "unmap": true, 00:06:17.615 "flush": true, 00:06:17.615 "reset": true, 00:06:17.615 "nvme_admin": false, 00:06:17.615 "nvme_io": false, 00:06:17.615 "nvme_io_md": false, 00:06:17.615 "write_zeroes": true, 00:06:17.615 "zcopy": true, 00:06:17.615 "get_zone_info": false, 00:06:17.615 "zone_management": false, 00:06:17.615 "zone_append": false, 00:06:17.615 "compare": false, 00:06:17.615 "compare_and_write": false, 00:06:17.615 "abort": true, 00:06:17.615 "seek_hole": false, 00:06:17.615 "seek_data": false, 00:06:17.615 "copy": true, 00:06:17.615 "nvme_iov_md": false 00:06:17.615 }, 00:06:17.615 "memory_domains": [ 00:06:17.615 { 00:06:17.615 "dma_device_id": "system", 00:06:17.615 "dma_device_type": 1 00:06:17.615 }, 00:06:17.615 { 00:06:17.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.615 "dma_device_type": 2 00:06:17.615 } 00:06:17.615 ], 00:06:17.615 "driver_specific": {} 00:06:17.615 }, 00:06:17.615 { 00:06:17.615 "name": "Passthru0", 00:06:17.615 "aliases": [ 00:06:17.615 "fa88186f-99f0-5b92-a2bf-4e6e7b4a5209" 00:06:17.615 ], 00:06:17.615 "product_name": "passthru", 00:06:17.615 "block_size": 512, 00:06:17.615 "num_blocks": 16384, 00:06:17.615 "uuid": "fa88186f-99f0-5b92-a2bf-4e6e7b4a5209", 00:06:17.615 "assigned_rate_limits": { 00:06:17.615 "rw_ios_per_sec": 0, 00:06:17.615 "rw_mbytes_per_sec": 0, 00:06:17.615 "r_mbytes_per_sec": 0, 00:06:17.615 "w_mbytes_per_sec": 0 00:06:17.615 }, 00:06:17.615 "claimed": false, 00:06:17.615 "zoned": false, 00:06:17.615 "supported_io_types": { 00:06:17.615 "read": true, 00:06:17.615 "write": true, 00:06:17.616 "unmap": true, 00:06:17.616 "flush": true, 00:06:17.616 "reset": true, 00:06:17.616 "nvme_admin": false, 00:06:17.616 "nvme_io": false, 00:06:17.616 "nvme_io_md": false, 00:06:17.616 "write_zeroes": true, 00:06:17.616 "zcopy": true, 00:06:17.616 "get_zone_info": false, 00:06:17.616 "zone_management": false, 00:06:17.616 "zone_append": false, 00:06:17.616 "compare": false, 00:06:17.616 "compare_and_write": false, 00:06:17.616 "abort": true, 00:06:17.616 "seek_hole": false, 00:06:17.616 "seek_data": false, 00:06:17.616 "copy": true, 00:06:17.616 "nvme_iov_md": false 00:06:17.616 }, 00:06:17.616 "memory_domains": [ 00:06:17.616 { 00:06:17.616 "dma_device_id": "system", 00:06:17.616 "dma_device_type": 1 00:06:17.616 }, 00:06:17.616 { 00:06:17.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.616 "dma_device_type": 2 
00:06:17.616 } 00:06:17.616 ], 00:06:17.616 "driver_specific": { 00:06:17.616 "passthru": { 00:06:17.616 "name": "Passthru0", 00:06:17.616 "base_bdev_name": "Malloc0" 00:06:17.616 } 00:06:17.616 } 00:06:17.616 } 00:06:17.616 ]' 00:06:17.616 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:17.616 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:17.616 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.616 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.616 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.616 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:17.616 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:17.616 ************************************ 00:06:17.616 END TEST rpc_integrity 00:06:17.616 ************************************ 00:06:17.616 14:11:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:17.616 00:06:17.616 real 0m0.338s 00:06:17.616 user 0m0.179s 00:06:17.616 sys 0m0.060s 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.616 14:11:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.875 14:11:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:17.875 14:11:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.875 14:11:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.875 14:11:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.875 ************************************ 00:06:17.875 START TEST rpc_plugins 00:06:17.875 ************************************ 00:06:17.875 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:17.875 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:17.875 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.875 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.875 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.875 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:17.875 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:17.875 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.875 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.875 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.875 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:17.875 { 00:06:17.875 "name": "Malloc1", 00:06:17.875 "aliases": 
[ 00:06:17.875 "74607b1a-5daa-4469-b2da-2f01d0465ece" 00:06:17.875 ], 00:06:17.875 "product_name": "Malloc disk", 00:06:17.875 "block_size": 4096, 00:06:17.875 "num_blocks": 256, 00:06:17.875 "uuid": "74607b1a-5daa-4469-b2da-2f01d0465ece", 00:06:17.875 "assigned_rate_limits": { 00:06:17.875 "rw_ios_per_sec": 0, 00:06:17.875 "rw_mbytes_per_sec": 0, 00:06:17.875 "r_mbytes_per_sec": 0, 00:06:17.875 "w_mbytes_per_sec": 0 00:06:17.875 }, 00:06:17.875 "claimed": false, 00:06:17.875 "zoned": false, 00:06:17.875 "supported_io_types": { 00:06:17.875 "read": true, 00:06:17.875 "write": true, 00:06:17.875 "unmap": true, 00:06:17.875 "flush": true, 00:06:17.875 "reset": true, 00:06:17.875 "nvme_admin": false, 00:06:17.876 "nvme_io": false, 00:06:17.876 "nvme_io_md": false, 00:06:17.876 "write_zeroes": true, 00:06:17.876 "zcopy": true, 00:06:17.876 "get_zone_info": false, 00:06:17.876 "zone_management": false, 00:06:17.876 "zone_append": false, 00:06:17.876 "compare": false, 00:06:17.876 "compare_and_write": false, 00:06:17.876 "abort": true, 00:06:17.876 "seek_hole": false, 00:06:17.876 "seek_data": false, 00:06:17.876 "copy": true, 00:06:17.876 "nvme_iov_md": false 00:06:17.876 }, 00:06:17.876 "memory_domains": [ 00:06:17.876 { 00:06:17.876 "dma_device_id": "system", 00:06:17.876 "dma_device_type": 1 00:06:17.876 }, 00:06:17.876 { 00:06:17.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.876 "dma_device_type": 2 00:06:17.876 } 00:06:17.876 ], 00:06:17.876 "driver_specific": {} 00:06:17.876 } 00:06:17.876 ]' 00:06:17.876 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:17.876 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:17.876 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:17.876 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.876 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.876 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.876 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:17.876 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.876 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.876 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.876 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:17.876 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:17.876 ************************************ 00:06:17.876 END TEST rpc_plugins 00:06:17.876 ************************************ 00:06:17.876 14:11:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:17.876 00:06:17.876 real 0m0.167s 00:06:17.876 user 0m0.088s 00:06:17.876 sys 0m0.034s 00:06:17.876 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.876 14:11:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:18.135 14:11:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:18.135 14:11:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.135 14:11:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.135 14:11:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.135 ************************************ 00:06:18.135 START TEST rpc_trace_cmd_test 00:06:18.135 ************************************ 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:18.135 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58966", 00:06:18.135 "tpoint_group_mask": "0x8", 00:06:18.135 "iscsi_conn": { 00:06:18.135 "mask": "0x2", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "scsi": { 00:06:18.135 "mask": "0x4", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "bdev": { 00:06:18.135 "mask": "0x8", 00:06:18.135 "tpoint_mask": "0xffffffffffffffff" 00:06:18.135 }, 00:06:18.135 "nvmf_rdma": { 00:06:18.135 "mask": "0x10", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "nvmf_tcp": { 00:06:18.135 "mask": "0x20", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "ftl": { 00:06:18.135 "mask": "0x40", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "blobfs": { 00:06:18.135 "mask": "0x80", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "dsa": { 00:06:18.135 "mask": "0x200", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "thread": { 00:06:18.135 "mask": "0x400", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "nvme_pcie": { 00:06:18.135 "mask": "0x800", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "iaa": { 00:06:18.135 "mask": "0x1000", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "nvme_tcp": { 00:06:18.135 "mask": "0x2000", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "bdev_nvme": { 00:06:18.135 "mask": "0x4000", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "sock": { 00:06:18.135 "mask": "0x8000", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "blob": { 00:06:18.135 "mask": "0x10000", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "bdev_raid": { 00:06:18.135 "mask": "0x20000", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 }, 00:06:18.135 "scheduler": { 00:06:18.135 "mask": "0x40000", 00:06:18.135 "tpoint_mask": "0x0" 00:06:18.135 } 00:06:18.135 }' 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:18.135 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:18.395 ************************************ 00:06:18.395 END TEST rpc_trace_cmd_test 00:06:18.395 ************************************ 00:06:18.395 14:11:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:18.395 00:06:18.395 real 0m0.257s 
00:06:18.395 user 0m0.202s 00:06:18.395 sys 0m0.048s 00:06:18.395 14:11:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.395 14:11:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.395 14:11:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:18.395 14:11:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:18.395 14:11:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:18.395 14:11:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.395 14:11:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.395 14:11:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.395 ************************************ 00:06:18.395 START TEST rpc_daemon_integrity 00:06:18.395 ************************************ 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:18.395 { 00:06:18.395 "name": "Malloc2", 00:06:18.395 "aliases": [ 00:06:18.395 "fdd1a574-3d6b-4a1e-9d64-d127aabc233d" 00:06:18.395 ], 00:06:18.395 "product_name": "Malloc disk", 00:06:18.395 "block_size": 512, 00:06:18.395 "num_blocks": 16384, 00:06:18.395 "uuid": "fdd1a574-3d6b-4a1e-9d64-d127aabc233d", 00:06:18.395 "assigned_rate_limits": { 00:06:18.395 "rw_ios_per_sec": 0, 00:06:18.395 "rw_mbytes_per_sec": 0, 00:06:18.395 "r_mbytes_per_sec": 0, 00:06:18.395 "w_mbytes_per_sec": 0 00:06:18.395 }, 00:06:18.395 "claimed": false, 00:06:18.395 "zoned": false, 00:06:18.395 "supported_io_types": { 00:06:18.395 "read": true, 00:06:18.395 "write": true, 00:06:18.395 "unmap": true, 00:06:18.395 "flush": true, 00:06:18.395 "reset": true, 00:06:18.395 "nvme_admin": false, 00:06:18.395 "nvme_io": false, 00:06:18.395 "nvme_io_md": false, 00:06:18.395 "write_zeroes": true, 00:06:18.395 "zcopy": true, 00:06:18.395 "get_zone_info": false, 00:06:18.395 "zone_management": false, 00:06:18.395 "zone_append": false, 00:06:18.395 "compare": false, 00:06:18.395 
"compare_and_write": false, 00:06:18.395 "abort": true, 00:06:18.395 "seek_hole": false, 00:06:18.395 "seek_data": false, 00:06:18.395 "copy": true, 00:06:18.395 "nvme_iov_md": false 00:06:18.395 }, 00:06:18.395 "memory_domains": [ 00:06:18.395 { 00:06:18.395 "dma_device_id": "system", 00:06:18.395 "dma_device_type": 1 00:06:18.395 }, 00:06:18.395 { 00:06:18.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.395 "dma_device_type": 2 00:06:18.395 } 00:06:18.395 ], 00:06:18.395 "driver_specific": {} 00:06:18.395 } 00:06:18.395 ]' 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.395 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.395 [2024-12-10 14:11:43.226030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:18.395 [2024-12-10 14:11:43.226105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:18.395 [2024-12-10 14:11:43.226133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:18.395 [2024-12-10 14:11:43.226149] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:18.654 [2024-12-10 14:11:43.229081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:18.654 [2024-12-10 14:11:43.229131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:18.654 Passthru0 00:06:18.654 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.654 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:18.654 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.654 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.654 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.654 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:18.654 { 00:06:18.654 "name": "Malloc2", 00:06:18.654 "aliases": [ 00:06:18.654 "fdd1a574-3d6b-4a1e-9d64-d127aabc233d" 00:06:18.654 ], 00:06:18.654 "product_name": "Malloc disk", 00:06:18.654 "block_size": 512, 00:06:18.654 "num_blocks": 16384, 00:06:18.654 "uuid": "fdd1a574-3d6b-4a1e-9d64-d127aabc233d", 00:06:18.654 "assigned_rate_limits": { 00:06:18.654 "rw_ios_per_sec": 0, 00:06:18.654 "rw_mbytes_per_sec": 0, 00:06:18.654 "r_mbytes_per_sec": 0, 00:06:18.654 "w_mbytes_per_sec": 0 00:06:18.654 }, 00:06:18.654 "claimed": true, 00:06:18.654 "claim_type": "exclusive_write", 00:06:18.654 "zoned": false, 00:06:18.654 "supported_io_types": { 00:06:18.654 "read": true, 00:06:18.654 "write": true, 00:06:18.654 "unmap": true, 00:06:18.654 "flush": true, 00:06:18.654 "reset": true, 00:06:18.654 "nvme_admin": false, 00:06:18.654 "nvme_io": false, 00:06:18.654 "nvme_io_md": false, 00:06:18.654 "write_zeroes": true, 00:06:18.654 "zcopy": true, 00:06:18.654 "get_zone_info": false, 00:06:18.654 "zone_management": false, 00:06:18.654 "zone_append": false, 00:06:18.654 "compare": false, 00:06:18.654 "compare_and_write": false, 00:06:18.654 "abort": true, 00:06:18.654 "seek_hole": false, 00:06:18.654 "seek_data": false, 
00:06:18.654 "copy": true, 00:06:18.654 "nvme_iov_md": false 00:06:18.654 }, 00:06:18.654 "memory_domains": [ 00:06:18.654 { 00:06:18.654 "dma_device_id": "system", 00:06:18.654 "dma_device_type": 1 00:06:18.654 }, 00:06:18.654 { 00:06:18.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.654 "dma_device_type": 2 00:06:18.654 } 00:06:18.654 ], 00:06:18.654 "driver_specific": {} 00:06:18.654 }, 00:06:18.654 { 00:06:18.654 "name": "Passthru0", 00:06:18.654 "aliases": [ 00:06:18.654 "775fc1c6-d09b-5302-8ef0-420427e348e6" 00:06:18.655 ], 00:06:18.655 "product_name": "passthru", 00:06:18.655 "block_size": 512, 00:06:18.655 "num_blocks": 16384, 00:06:18.655 "uuid": "775fc1c6-d09b-5302-8ef0-420427e348e6", 00:06:18.655 "assigned_rate_limits": { 00:06:18.655 "rw_ios_per_sec": 0, 00:06:18.655 "rw_mbytes_per_sec": 0, 00:06:18.655 "r_mbytes_per_sec": 0, 00:06:18.655 "w_mbytes_per_sec": 0 00:06:18.655 }, 00:06:18.655 "claimed": false, 00:06:18.655 "zoned": false, 00:06:18.655 "supported_io_types": { 00:06:18.655 "read": true, 00:06:18.655 "write": true, 00:06:18.655 "unmap": true, 00:06:18.655 "flush": true, 00:06:18.655 "reset": true, 00:06:18.655 "nvme_admin": false, 00:06:18.655 "nvme_io": false, 00:06:18.655 "nvme_io_md": false, 00:06:18.655 "write_zeroes": true, 00:06:18.655 "zcopy": true, 00:06:18.655 "get_zone_info": false, 00:06:18.655 "zone_management": false, 00:06:18.655 "zone_append": false, 00:06:18.655 "compare": false, 00:06:18.655 "compare_and_write": false, 00:06:18.655 "abort": true, 00:06:18.655 "seek_hole": false, 00:06:18.655 "seek_data": false, 00:06:18.655 "copy": true, 00:06:18.655 "nvme_iov_md": false 00:06:18.655 }, 00:06:18.655 "memory_domains": [ 00:06:18.655 { 00:06:18.655 "dma_device_id": "system", 00:06:18.655 "dma_device_type": 1 00:06:18.655 }, 00:06:18.655 { 00:06:18.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.655 "dma_device_type": 2 00:06:18.655 } 00:06:18.655 ], 00:06:18.655 "driver_specific": { 00:06:18.655 "passthru": { 00:06:18.655 "name": "Passthru0", 00:06:18.655 "base_bdev_name": "Malloc2" 00:06:18.655 } 00:06:18.655 } 00:06:18.655 } 00:06:18.655 ]' 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:18.655 ************************************ 00:06:18.655 END TEST rpc_daemon_integrity 00:06:18.655 ************************************ 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:18.655 00:06:18.655 real 0m0.361s 00:06:18.655 user 0m0.195s 00:06:18.655 sys 0m0.064s 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.655 14:11:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.913 14:11:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:18.913 14:11:43 rpc -- rpc/rpc.sh@84 -- # killprocess 58966 00:06:18.913 14:11:43 rpc -- common/autotest_common.sh@954 -- # '[' -z 58966 ']' 00:06:18.913 14:11:43 rpc -- common/autotest_common.sh@958 -- # kill -0 58966 00:06:18.913 14:11:43 rpc -- common/autotest_common.sh@959 -- # uname 00:06:18.913 14:11:43 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.913 14:11:43 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58966 00:06:18.913 killing process with pid 58966 00:06:18.913 14:11:43 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.914 14:11:43 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.914 14:11:43 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58966' 00:06:18.914 14:11:43 rpc -- common/autotest_common.sh@973 -- # kill 58966 00:06:18.914 14:11:43 rpc -- common/autotest_common.sh@978 -- # wait 58966 00:06:21.448 00:06:21.448 real 0m5.640s 00:06:21.448 user 0m6.101s 00:06:21.448 sys 0m1.052s 00:06:21.448 14:11:46 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.448 ************************************ 00:06:21.448 END TEST rpc 00:06:21.448 ************************************ 00:06:21.448 14:11:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.448 14:11:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:21.448 14:11:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.448 14:11:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.448 14:11:46 -- common/autotest_common.sh@10 -- # set +x 00:06:21.448 ************************************ 00:06:21.448 START TEST skip_rpc 00:06:21.448 ************************************ 00:06:21.448 14:11:46 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:21.716 * Looking for test storage... 
00:06:21.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.716 14:11:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.716 --rc genhtml_branch_coverage=1 00:06:21.716 --rc genhtml_function_coverage=1 00:06:21.716 --rc genhtml_legend=1 00:06:21.716 --rc geninfo_all_blocks=1 00:06:21.716 --rc geninfo_unexecuted_blocks=1 00:06:21.716 00:06:21.716 ' 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.716 --rc genhtml_branch_coverage=1 00:06:21.716 --rc genhtml_function_coverage=1 00:06:21.716 --rc genhtml_legend=1 00:06:21.716 --rc geninfo_all_blocks=1 00:06:21.716 --rc geninfo_unexecuted_blocks=1 00:06:21.716 00:06:21.716 ' 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:21.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.716 --rc genhtml_branch_coverage=1 00:06:21.716 --rc genhtml_function_coverage=1 00:06:21.716 --rc genhtml_legend=1 00:06:21.716 --rc geninfo_all_blocks=1 00:06:21.716 --rc geninfo_unexecuted_blocks=1 00:06:21.716 00:06:21.716 ' 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.716 --rc genhtml_branch_coverage=1 00:06:21.716 --rc genhtml_function_coverage=1 00:06:21.716 --rc genhtml_legend=1 00:06:21.716 --rc geninfo_all_blocks=1 00:06:21.716 --rc geninfo_unexecuted_blocks=1 00:06:21.716 00:06:21.716 ' 00:06:21.716 14:11:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:21.716 14:11:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:21.716 14:11:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.716 14:11:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.717 ************************************ 00:06:21.717 START TEST skip_rpc 00:06:21.717 ************************************ 00:06:21.717 14:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:21.717 14:11:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59200 00:06:21.717 14:11:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:21.717 14:11:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.717 14:11:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:21.978 [2024-12-10 14:11:46.613446] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:06:21.978 [2024-12-10 14:11:46.613751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59200 ] 00:06:21.978 [2024-12-10 14:11:46.800833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.237 [2024-12-10 14:11:46.948125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59200 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59200 ']' 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59200 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59200 00:06:27.511 killing process with pid 59200 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59200' 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59200 00:06:27.511 14:11:51 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59200 00:06:29.452 ************************************ 00:06:29.452 END TEST skip_rpc 00:06:29.452 ************************************ 00:06:29.452 00:06:29.452 real 0m7.730s 00:06:29.452 user 0m7.065s 00:06:29.452 sys 0m0.587s 00:06:29.452 14:11:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.452 14:11:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:29.736 14:11:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:29.736 14:11:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.736 14:11:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.736 14:11:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.736 ************************************ 00:06:29.736 START TEST skip_rpc_with_json 00:06:29.736 ************************************ 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59310 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59310 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59310 ']' 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.736 14:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.736 [2024-12-10 14:11:54.413653] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
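waitforlisten, echoed above while spdk_tgt (pid 59310) boots, blocks until the target's JSON-RPC server is accepting connections on /var/tmp/spdk.sock before the test issues any rpc_cmd calls. A rough stand-in (the helper name, retry count, and poll interval are assumptions, not the actual common.sh implementation) could poll a cheap RPC until it answers:

    # Sketch: wait for a spdk_tgt RPC socket to come up (hypothetical helper).
    wait_for_rpc_socket() {
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
        while (( retries-- > 0 )); do
            # spdk_get_version succeeds only once the RPC server is listening.
            if scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        echo "RPC socket $sock never came up" >&2
        return 1
    }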
00:06:29.736 [2024-12-10 14:11:54.413788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59310 ] 00:06:29.994 [2024-12-10 14:11:54.595764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.994 [2024-12-10 14:11:54.744529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.924 [2024-12-10 14:11:55.661363] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:30.924 request: 00:06:30.924 { 00:06:30.924 "trtype": "tcp", 00:06:30.924 "method": "nvmf_get_transports", 00:06:30.924 "req_id": 1 00:06:30.924 } 00:06:30.924 Got JSON-RPC error response 00:06:30.924 response: 00:06:30.924 { 00:06:30.924 "code": -19, 00:06:30.924 "message": "No such device" 00:06:30.924 } 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.924 [2024-12-10 14:11:55.677463] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.924 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:31.181 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.181 14:11:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:31.181 { 00:06:31.181 "subsystems": [ 00:06:31.181 { 00:06:31.181 "subsystem": "fsdev", 00:06:31.181 "config": [ 00:06:31.181 { 00:06:31.181 "method": "fsdev_set_opts", 00:06:31.181 "params": { 00:06:31.181 "fsdev_io_pool_size": 65535, 00:06:31.181 "fsdev_io_cache_size": 256 00:06:31.181 } 00:06:31.181 } 00:06:31.181 ] 00:06:31.181 }, 00:06:31.181 { 00:06:31.181 "subsystem": "keyring", 00:06:31.181 "config": [] 00:06:31.181 }, 00:06:31.181 { 00:06:31.181 "subsystem": "iobuf", 00:06:31.181 "config": [ 00:06:31.181 { 00:06:31.181 "method": "iobuf_set_options", 00:06:31.181 "params": { 00:06:31.181 "small_pool_count": 8192, 00:06:31.181 "large_pool_count": 1024, 00:06:31.181 "small_bufsize": 8192, 00:06:31.181 "large_bufsize": 135168, 00:06:31.181 "enable_numa": false 00:06:31.181 } 00:06:31.181 } 00:06:31.181 ] 00:06:31.181 }, 00:06:31.181 { 00:06:31.181 "subsystem": "sock", 00:06:31.181 "config": [ 00:06:31.181 { 
00:06:31.181 "method": "sock_set_default_impl", 00:06:31.181 "params": { 00:06:31.181 "impl_name": "posix" 00:06:31.181 } 00:06:31.181 }, 00:06:31.181 { 00:06:31.181 "method": "sock_impl_set_options", 00:06:31.181 "params": { 00:06:31.181 "impl_name": "ssl", 00:06:31.181 "recv_buf_size": 4096, 00:06:31.181 "send_buf_size": 4096, 00:06:31.181 "enable_recv_pipe": true, 00:06:31.181 "enable_quickack": false, 00:06:31.181 "enable_placement_id": 0, 00:06:31.181 "enable_zerocopy_send_server": true, 00:06:31.181 "enable_zerocopy_send_client": false, 00:06:31.181 "zerocopy_threshold": 0, 00:06:31.181 "tls_version": 0, 00:06:31.181 "enable_ktls": false 00:06:31.181 } 00:06:31.181 }, 00:06:31.181 { 00:06:31.181 "method": "sock_impl_set_options", 00:06:31.181 "params": { 00:06:31.181 "impl_name": "posix", 00:06:31.181 "recv_buf_size": 2097152, 00:06:31.181 "send_buf_size": 2097152, 00:06:31.181 "enable_recv_pipe": true, 00:06:31.181 "enable_quickack": false, 00:06:31.181 "enable_placement_id": 0, 00:06:31.181 "enable_zerocopy_send_server": true, 00:06:31.181 "enable_zerocopy_send_client": false, 00:06:31.181 "zerocopy_threshold": 0, 00:06:31.181 "tls_version": 0, 00:06:31.181 "enable_ktls": false 00:06:31.181 } 00:06:31.181 } 00:06:31.181 ] 00:06:31.181 }, 00:06:31.181 { 00:06:31.181 "subsystem": "vmd", 00:06:31.181 "config": [] 00:06:31.181 }, 00:06:31.181 { 00:06:31.181 "subsystem": "accel", 00:06:31.181 "config": [ 00:06:31.181 { 00:06:31.181 "method": "accel_set_options", 00:06:31.181 "params": { 00:06:31.181 "small_cache_size": 128, 00:06:31.181 "large_cache_size": 16, 00:06:31.181 "task_count": 2048, 00:06:31.181 "sequence_count": 2048, 00:06:31.181 "buf_count": 2048 00:06:31.181 } 00:06:31.181 } 00:06:31.181 ] 00:06:31.181 }, 00:06:31.181 { 00:06:31.181 "subsystem": "bdev", 00:06:31.181 "config": [ 00:06:31.181 { 00:06:31.181 "method": "bdev_set_options", 00:06:31.181 "params": { 00:06:31.181 "bdev_io_pool_size": 65535, 00:06:31.181 "bdev_io_cache_size": 256, 00:06:31.181 "bdev_auto_examine": true, 00:06:31.181 "iobuf_small_cache_size": 128, 00:06:31.181 "iobuf_large_cache_size": 16 00:06:31.181 } 00:06:31.181 }, 00:06:31.181 { 00:06:31.181 "method": "bdev_raid_set_options", 00:06:31.181 "params": { 00:06:31.181 "process_window_size_kb": 1024, 00:06:31.181 "process_max_bandwidth_mb_sec": 0 00:06:31.181 } 00:06:31.181 }, 00:06:31.181 { 00:06:31.182 "method": "bdev_iscsi_set_options", 00:06:31.182 "params": { 00:06:31.182 "timeout_sec": 30 00:06:31.182 } 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "method": "bdev_nvme_set_options", 00:06:31.182 "params": { 00:06:31.182 "action_on_timeout": "none", 00:06:31.182 "timeout_us": 0, 00:06:31.182 "timeout_admin_us": 0, 00:06:31.182 "keep_alive_timeout_ms": 10000, 00:06:31.182 "arbitration_burst": 0, 00:06:31.182 "low_priority_weight": 0, 00:06:31.182 "medium_priority_weight": 0, 00:06:31.182 "high_priority_weight": 0, 00:06:31.182 "nvme_adminq_poll_period_us": 10000, 00:06:31.182 "nvme_ioq_poll_period_us": 0, 00:06:31.182 "io_queue_requests": 0, 00:06:31.182 "delay_cmd_submit": true, 00:06:31.182 "transport_retry_count": 4, 00:06:31.182 "bdev_retry_count": 3, 00:06:31.182 "transport_ack_timeout": 0, 00:06:31.182 "ctrlr_loss_timeout_sec": 0, 00:06:31.182 "reconnect_delay_sec": 0, 00:06:31.182 "fast_io_fail_timeout_sec": 0, 00:06:31.182 "disable_auto_failback": false, 00:06:31.182 "generate_uuids": false, 00:06:31.182 "transport_tos": 0, 00:06:31.182 "nvme_error_stat": false, 00:06:31.182 "rdma_srq_size": 0, 00:06:31.182 "io_path_stat": false, 
00:06:31.182 "allow_accel_sequence": false, 00:06:31.182 "rdma_max_cq_size": 0, 00:06:31.182 "rdma_cm_event_timeout_ms": 0, 00:06:31.182 "dhchap_digests": [ 00:06:31.182 "sha256", 00:06:31.182 "sha384", 00:06:31.182 "sha512" 00:06:31.182 ], 00:06:31.182 "dhchap_dhgroups": [ 00:06:31.182 "null", 00:06:31.182 "ffdhe2048", 00:06:31.182 "ffdhe3072", 00:06:31.182 "ffdhe4096", 00:06:31.182 "ffdhe6144", 00:06:31.182 "ffdhe8192" 00:06:31.182 ] 00:06:31.182 } 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "method": "bdev_nvme_set_hotplug", 00:06:31.182 "params": { 00:06:31.182 "period_us": 100000, 00:06:31.182 "enable": false 00:06:31.182 } 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "method": "bdev_wait_for_examine" 00:06:31.182 } 00:06:31.182 ] 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "subsystem": "scsi", 00:06:31.182 "config": null 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "subsystem": "scheduler", 00:06:31.182 "config": [ 00:06:31.182 { 00:06:31.182 "method": "framework_set_scheduler", 00:06:31.182 "params": { 00:06:31.182 "name": "static" 00:06:31.182 } 00:06:31.182 } 00:06:31.182 ] 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "subsystem": "vhost_scsi", 00:06:31.182 "config": [] 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "subsystem": "vhost_blk", 00:06:31.182 "config": [] 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "subsystem": "ublk", 00:06:31.182 "config": [] 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "subsystem": "nbd", 00:06:31.182 "config": [] 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "subsystem": "nvmf", 00:06:31.182 "config": [ 00:06:31.182 { 00:06:31.182 "method": "nvmf_set_config", 00:06:31.182 "params": { 00:06:31.182 "discovery_filter": "match_any", 00:06:31.182 "admin_cmd_passthru": { 00:06:31.182 "identify_ctrlr": false 00:06:31.182 }, 00:06:31.182 "dhchap_digests": [ 00:06:31.182 "sha256", 00:06:31.182 "sha384", 00:06:31.182 "sha512" 00:06:31.182 ], 00:06:31.182 "dhchap_dhgroups": [ 00:06:31.182 "null", 00:06:31.182 "ffdhe2048", 00:06:31.182 "ffdhe3072", 00:06:31.182 "ffdhe4096", 00:06:31.182 "ffdhe6144", 00:06:31.182 "ffdhe8192" 00:06:31.182 ] 00:06:31.182 } 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "method": "nvmf_set_max_subsystems", 00:06:31.182 "params": { 00:06:31.182 "max_subsystems": 1024 00:06:31.182 } 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "method": "nvmf_set_crdt", 00:06:31.182 "params": { 00:06:31.182 "crdt1": 0, 00:06:31.182 "crdt2": 0, 00:06:31.182 "crdt3": 0 00:06:31.182 } 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "method": "nvmf_create_transport", 00:06:31.182 "params": { 00:06:31.182 "trtype": "TCP", 00:06:31.182 "max_queue_depth": 128, 00:06:31.182 "max_io_qpairs_per_ctrlr": 127, 00:06:31.182 "in_capsule_data_size": 4096, 00:06:31.182 "max_io_size": 131072, 00:06:31.182 "io_unit_size": 131072, 00:06:31.182 "max_aq_depth": 128, 00:06:31.182 "num_shared_buffers": 511, 00:06:31.182 "buf_cache_size": 4294967295, 00:06:31.182 "dif_insert_or_strip": false, 00:06:31.182 "zcopy": false, 00:06:31.182 "c2h_success": true, 00:06:31.182 "sock_priority": 0, 00:06:31.182 "abort_timeout_sec": 1, 00:06:31.182 "ack_timeout": 0, 00:06:31.182 "data_wr_pool_size": 0 00:06:31.182 } 00:06:31.182 } 00:06:31.182 ] 00:06:31.182 }, 00:06:31.182 { 00:06:31.182 "subsystem": "iscsi", 00:06:31.182 "config": [ 00:06:31.182 { 00:06:31.182 "method": "iscsi_set_options", 00:06:31.182 "params": { 00:06:31.182 "node_base": "iqn.2016-06.io.spdk", 00:06:31.182 "max_sessions": 128, 00:06:31.182 "max_connections_per_session": 2, 00:06:31.182 "max_queue_depth": 64, 00:06:31.182 
"default_time2wait": 2, 00:06:31.182 "default_time2retain": 20, 00:06:31.182 "first_burst_length": 8192, 00:06:31.182 "immediate_data": true, 00:06:31.182 "allow_duplicated_isid": false, 00:06:31.182 "error_recovery_level": 0, 00:06:31.182 "nop_timeout": 60, 00:06:31.182 "nop_in_interval": 30, 00:06:31.182 "disable_chap": false, 00:06:31.182 "require_chap": false, 00:06:31.182 "mutual_chap": false, 00:06:31.182 "chap_group": 0, 00:06:31.182 "max_large_datain_per_connection": 64, 00:06:31.182 "max_r2t_per_connection": 4, 00:06:31.182 "pdu_pool_size": 36864, 00:06:31.182 "immediate_data_pool_size": 16384, 00:06:31.182 "data_out_pool_size": 2048 00:06:31.182 } 00:06:31.182 } 00:06:31.182 ] 00:06:31.182 } 00:06:31.182 ] 00:06:31.182 } 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59310 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59310 ']' 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59310 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59310 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.182 killing process with pid 59310 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59310' 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59310 00:06:31.182 14:11:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59310 00:06:33.708 14:11:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59366 00:06:33.708 14:11:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:33.708 14:11:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59366 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59366 ']' 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59366 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59366 00:06:38.970 killing process with pid 59366 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59366' 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 59366 00:06:38.970 14:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59366 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:41.499 00:06:41.499 real 0m11.498s 00:06:41.499 user 0m10.860s 00:06:41.499 sys 0m0.964s 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.499 ************************************ 00:06:41.499 END TEST skip_rpc_with_json 00:06:41.499 ************************************ 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:41.499 14:12:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:41.499 14:12:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.499 14:12:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.499 14:12:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.499 ************************************ 00:06:41.499 START TEST skip_rpc_with_delay 00:06:41.499 ************************************ 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:41.499 14:12:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:41.499 [2024-12-10 14:12:05.989034] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
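The spdk_app_start *ERROR* just above is the expected result: skip_rpc_with_delay launches spdk_tgt with --no-rpc-server and --wait-for-rpc together, a contradictory pair (there is no RPC server to wait on), and the NOT wrapper asserts that startup exits non-zero. Stripped of the common.sh machinery, the negative check amounts to the sketch below (the echo strings are illustrative):

    # Sketch: the invalid flag combination must make spdk_tgt refuse to start.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt accepted --wait-for-rpc with no RPC server" >&2
        exit 1   # test fails if the target unexpectedly starts
    fi
    echo "got the expected startup error"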
00:06:41.499 14:12:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:41.500 14:12:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.500 14:12:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.500 14:12:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.500 00:06:41.500 real 0m0.178s 00:06:41.500 user 0m0.089s 00:06:41.500 sys 0m0.087s 00:06:41.500 ************************************ 00:06:41.500 END TEST skip_rpc_with_delay 00:06:41.500 ************************************ 00:06:41.500 14:12:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.500 14:12:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:41.500 14:12:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:41.500 14:12:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:41.500 14:12:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:41.500 14:12:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.500 14:12:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.500 14:12:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.500 ************************************ 00:06:41.500 START TEST exit_on_failed_rpc_init 00:06:41.500 ************************************ 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59499 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59499 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59499 ']' 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.500 14:12:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:41.500 [2024-12-10 14:12:06.241349] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:06:41.500 [2024-12-10 14:12:06.241471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:06:41.757 [2024-12-10 14:12:06.420876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.757 [2024-12-10 14:12:06.530286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:42.690 14:12:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:42.947 [2024-12-10 14:12:07.530018] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:06:42.947 [2024-12-10 14:12:07.530363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59523 ] 00:06:42.947 [2024-12-10 14:12:07.712531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.247 [2024-12-10 14:12:07.846705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.247 [2024-12-10 14:12:07.846979] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:43.247 [2024-12-10 14:12:07.847003] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:43.247 [2024-12-10 14:12:07.847024] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59499 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59499 ']' 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59499 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59499 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.521 killing process with pid 59499 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59499' 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59499 00:06:43.521 14:12:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59499 00:06:46.049 00:06:46.049 real 0m4.422s 00:06:46.049 user 0m4.817s 00:06:46.049 sys 0m0.633s 00:06:46.049 14:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.049 ************************************ 00:06:46.049 END TEST exit_on_failed_rpc_init 00:06:46.049 ************************************ 00:06:46.049 14:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:46.049 14:12:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:46.049 00:06:46.049 real 0m24.392s 00:06:46.049 user 0m23.038s 00:06:46.049 sys 0m2.623s 00:06:46.049 14:12:10 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.049 14:12:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.049 ************************************ 00:06:46.049 END TEST skip_rpc 00:06:46.049 ************************************ 00:06:46.049 14:12:10 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:46.049 14:12:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.049 14:12:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.049 14:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:46.049 
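exit_on_failed_rpc_init, which finished above, starts one spdk_tgt that binds /var/tmp/spdk.sock and then launches a second instance whose rpc.c listen fails because the socket is already in use; the test asserts that the second process exits non-zero (the es=234, then 106, then 1 normalization) and that the first can still be killed cleanly. When two targets genuinely need to coexist, each gets its own socket; a minimal sketch, assuming the standard -r/--rpc-socket application option:

    # Sketch: run two SPDK targets side by side on distinct RPC sockets (-r is
    # the usual app option for the RPC listen address; verify with --help).
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version   # address instance A
    scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version   # address instance B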
************************************ 00:06:46.049 START TEST rpc_client 00:06:46.049 ************************************ 00:06:46.049 14:12:10 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:46.049 * Looking for test storage... 00:06:46.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:46.049 14:12:10 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.049 14:12:10 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.049 14:12:10 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.308 14:12:10 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.308 14:12:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:46.308 14:12:10 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.308 14:12:10 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.308 --rc genhtml_branch_coverage=1 00:06:46.308 --rc genhtml_function_coverage=1 00:06:46.308 --rc genhtml_legend=1 00:06:46.308 --rc geninfo_all_blocks=1 00:06:46.308 --rc geninfo_unexecuted_blocks=1 00:06:46.308 00:06:46.308 ' 00:06:46.308 14:12:10 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.308 --rc genhtml_branch_coverage=1 00:06:46.308 --rc genhtml_function_coverage=1 00:06:46.308 --rc genhtml_legend=1 00:06:46.308 --rc geninfo_all_blocks=1 00:06:46.308 --rc geninfo_unexecuted_blocks=1 00:06:46.308 00:06:46.308 ' 00:06:46.308 14:12:10 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.308 --rc genhtml_branch_coverage=1 00:06:46.308 --rc genhtml_function_coverage=1 00:06:46.308 --rc genhtml_legend=1 00:06:46.308 --rc geninfo_all_blocks=1 00:06:46.308 --rc geninfo_unexecuted_blocks=1 00:06:46.308 00:06:46.308 ' 00:06:46.308 14:12:10 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.308 --rc genhtml_branch_coverage=1 00:06:46.308 --rc genhtml_function_coverage=1 00:06:46.308 --rc genhtml_legend=1 00:06:46.308 --rc geninfo_all_blocks=1 00:06:46.308 --rc geninfo_unexecuted_blocks=1 00:06:46.308 00:06:46.308 ' 00:06:46.308 14:12:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:46.308 OK 00:06:46.308 14:12:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:46.308 00:06:46.308 real 0m0.320s 00:06:46.308 user 0m0.165s 00:06:46.308 sys 0m0.171s 00:06:46.308 14:12:11 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.308 14:12:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:46.308 ************************************ 00:06:46.308 END TEST rpc_client 00:06:46.308 ************************************ 00:06:46.308 14:12:11 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:46.308 14:12:11 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.308 14:12:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.308 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.308 ************************************ 00:06:46.308 START TEST json_config 00:06:46.308 ************************************ 00:06:46.308 14:12:11 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:46.568 14:12:11 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.568 14:12:11 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.568 14:12:11 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.568 14:12:11 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.568 14:12:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.568 14:12:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.568 14:12:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.568 14:12:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.568 14:12:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.568 14:12:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.568 14:12:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.568 14:12:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.568 14:12:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.568 14:12:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.568 14:12:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.568 14:12:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:46.568 14:12:11 json_config -- scripts/common.sh@345 -- # : 1 00:06:46.568 14:12:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.568 14:12:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.568 14:12:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:46.568 14:12:11 json_config -- scripts/common.sh@353 -- # local d=1 00:06:46.568 14:12:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.568 14:12:11 json_config -- scripts/common.sh@355 -- # echo 1 00:06:46.568 14:12:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.568 14:12:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:46.568 14:12:11 json_config -- scripts/common.sh@353 -- # local d=2 00:06:46.568 14:12:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.568 14:12:11 json_config -- scripts/common.sh@355 -- # echo 2 00:06:46.568 14:12:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.568 14:12:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.568 14:12:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.568 14:12:11 json_config -- scripts/common.sh@368 -- # return 0 00:06:46.568 14:12:11 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.568 14:12:11 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.568 --rc genhtml_branch_coverage=1 00:06:46.568 --rc genhtml_function_coverage=1 00:06:46.568 --rc genhtml_legend=1 00:06:46.568 --rc geninfo_all_blocks=1 00:06:46.568 --rc geninfo_unexecuted_blocks=1 00:06:46.568 00:06:46.568 ' 00:06:46.568 14:12:11 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.568 --rc genhtml_branch_coverage=1 00:06:46.568 --rc genhtml_function_coverage=1 00:06:46.568 --rc genhtml_legend=1 00:06:46.568 --rc geninfo_all_blocks=1 00:06:46.568 --rc geninfo_unexecuted_blocks=1 00:06:46.568 00:06:46.568 ' 00:06:46.568 14:12:11 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.568 --rc genhtml_branch_coverage=1 00:06:46.568 --rc genhtml_function_coverage=1 00:06:46.568 --rc genhtml_legend=1 00:06:46.568 --rc geninfo_all_blocks=1 00:06:46.568 --rc geninfo_unexecuted_blocks=1 00:06:46.568 00:06:46.568 ' 00:06:46.568 14:12:11 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.568 --rc genhtml_branch_coverage=1 00:06:46.568 --rc genhtml_function_coverage=1 00:06:46.568 --rc genhtml_legend=1 00:06:46.568 --rc geninfo_all_blocks=1 00:06:46.568 --rc geninfo_unexecuted_blocks=1 00:06:46.568 00:06:46.568 ' 00:06:46.568 14:12:11 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.568 14:12:11 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0170221-08c0-40d7-bc6c-09be5c3f45af 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f0170221-08c0-40d7-bc6c-09be5c3f45af 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.568 14:12:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.568 14:12:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.568 14:12:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.568 14:12:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.568 14:12:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.568 14:12:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.568 14:12:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.568 14:12:11 json_config -- paths/export.sh@5 -- # export PATH 00:06:46.568 14:12:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@51 -- # : 0 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.568 14:12:11 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.568 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.568 14:12:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.569 14:12:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.569 14:12:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.569 14:12:11 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:46.569 14:12:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:46.569 14:12:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:46.569 14:12:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:46.569 14:12:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:46.569 14:12:11 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:46.569 WARNING: No tests are enabled so not running JSON configuration tests 00:06:46.569 14:12:11 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:46.569 00:06:46.569 real 0m0.246s 00:06:46.569 user 0m0.141s 00:06:46.569 sys 0m0.105s 00:06:46.569 14:12:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.569 14:12:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.569 ************************************ 00:06:46.569 END TEST json_config 00:06:46.569 ************************************ 00:06:46.828 14:12:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:46.828 14:12:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.828 14:12:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.828 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.828 ************************************ 00:06:46.828 START TEST json_config_extra_key 00:06:46.828 ************************************ 00:06:46.828 14:12:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:46.828 14:12:11 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.828 14:12:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.828 14:12:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.828 14:12:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.828 14:12:11 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.828 14:12:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:46.828 14:12:11 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.828 14:12:11 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.828 --rc genhtml_branch_coverage=1 00:06:46.828 --rc genhtml_function_coverage=1 00:06:46.828 --rc genhtml_legend=1 00:06:46.828 --rc geninfo_all_blocks=1 00:06:46.828 --rc geninfo_unexecuted_blocks=1 00:06:46.828 00:06:46.828 ' 00:06:46.828 14:12:11 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.828 --rc genhtml_branch_coverage=1 00:06:46.828 --rc genhtml_function_coverage=1 00:06:46.828 --rc genhtml_legend=1 00:06:46.828 --rc geninfo_all_blocks=1 00:06:46.828 --rc geninfo_unexecuted_blocks=1 00:06:46.828 00:06:46.828 ' 00:06:46.828 14:12:11 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.828 --rc genhtml_branch_coverage=1 00:06:46.828 --rc genhtml_function_coverage=1 00:06:46.828 --rc genhtml_legend=1 00:06:46.828 --rc geninfo_all_blocks=1 00:06:46.829 --rc geninfo_unexecuted_blocks=1 00:06:46.829 00:06:46.829 ' 00:06:46.829 14:12:11 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.829 --rc genhtml_branch_coverage=1 00:06:46.829 --rc 
genhtml_function_coverage=1 00:06:46.829 --rc genhtml_legend=1 00:06:46.829 --rc geninfo_all_blocks=1 00:06:46.829 --rc geninfo_unexecuted_blocks=1 00:06:46.829 00:06:46.829 ' 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0170221-08c0-40d7-bc6c-09be5c3f45af 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f0170221-08c0-40d7-bc6c-09be5c3f45af 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.829 14:12:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.829 14:12:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.829 14:12:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.829 14:12:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.829 14:12:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.829 14:12:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.829 14:12:11 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.829 14:12:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:46.829 14:12:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.829 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.829 14:12:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:46.829 INFO: launching applications... 
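The "[: : integer expression expected" messages above (first under json_config, again under json_config_extra_key) come from test/nvmf/common.sh line 33, where an empty variable reaches a numeric test as '[' '' -eq 1 ']'. The test evaluates to false with status 2, so the run continues, but the noise is avoidable. A minimal sketch of the failure and a defensive rewrite; FLAG is a hypothetical stand-in, since the real variable's name is not visible in the xtrace output:

    FLAG=""
    [ "$FLAG" -eq 1 ] && echo enabled        # bash: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ] && echo enabled   # empty/unset defaults to 0; test stays quiet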
00:06:46.829 14:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59733 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:46.829 Waiting for target to run... 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59733 /var/tmp/spdk_tgt.sock 00:06:46.829 14:12:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:46.829 14:12:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59733 ']' 00:06:46.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:46.829 14:12:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:46.829 14:12:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.829 14:12:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:46.829 14:12:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.829 14:12:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:47.088 [2024-12-10 14:12:11.763486] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:06:47.088 [2024-12-10 14:12:11.763611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59733 ] 00:06:47.348 [2024-12-10 14:12:12.153634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.608 [2024-12-10 14:12:12.263597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.176 00:06:48.176 INFO: shutting down applications... 00:06:48.176 14:12:12 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.177 14:12:12 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:48.177 14:12:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:48.177 14:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
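The start_app/waitforlisten sequence above boots spdk_tgt with a private RPC socket (-r /var/tmp/spdk_tgt.sock) and then polls until that socket answers, with max_retries=100 as the budget. A minimal sketch of the polling loop, assuming the rpc.py path and socket from the trace; spdk_get_version is a lightweight SPDK RPC that serves as the liveness probe here:

    wait_for_rpc() {                       # hypothetical condensation of waitforlisten
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1             # target exited early
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 \
                spdk_get_version >/dev/null 2>&1 && return 0   # RPC server is up
            sleep 0.1
        done
        return 1                                               # retries exhausted
    }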
00:06:48.177 14:12:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:48.177 14:12:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:48.177 14:12:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:48.177 14:12:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59733 ]] 00:06:48.177 14:12:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59733 00:06:48.177 14:12:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:48.177 14:12:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.177 14:12:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59733 00:06:48.177 14:12:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:48.745 14:12:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:48.745 14:12:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.745 14:12:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59733 00:06:48.745 14:12:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:49.313 14:12:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:49.313 14:12:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.313 14:12:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59733 00:06:49.313 14:12:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:49.881 14:12:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:49.881 14:12:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.881 14:12:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59733 00:06:49.881 14:12:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.139 14:12:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.139 14:12:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.139 14:12:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59733 00:06:50.139 14:12:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.709 14:12:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.709 14:12:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.709 14:12:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59733 00:06:50.709 14:12:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.276 14:12:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.276 14:12:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.276 14:12:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59733 00:06:51.276 SPDK target shutdown done 00:06:51.276 Success 00:06:51.276 14:12:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:51.276 14:12:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:51.276 14:12:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:51.276 14:12:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:51.276 14:12:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:51.276 00:06:51.276 real 0m4.565s 00:06:51.276 user 0m4.023s 00:06:51.276 sys 0m0.598s 00:06:51.276 
14:12:15 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.276 14:12:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:51.276 ************************************ 00:06:51.276 END TEST json_config_extra_key 00:06:51.276 ************************************ 00:06:51.276 14:12:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:51.276 14:12:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.276 14:12:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.276 14:12:16 -- common/autotest_common.sh@10 -- # set +x 00:06:51.276 ************************************ 00:06:51.276 START TEST alias_rpc 00:06:51.276 ************************************ 00:06:51.276 14:12:16 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:51.536 * Looking for test storage... 00:06:51.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.536 14:12:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.536 --rc genhtml_branch_coverage=1 00:06:51.536 --rc genhtml_function_coverage=1 00:06:51.536 --rc genhtml_legend=1 00:06:51.536 --rc geninfo_all_blocks=1 00:06:51.536 --rc geninfo_unexecuted_blocks=1 00:06:51.536 00:06:51.536 ' 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.536 --rc genhtml_branch_coverage=1 00:06:51.536 --rc genhtml_function_coverage=1 00:06:51.536 --rc genhtml_legend=1 00:06:51.536 --rc geninfo_all_blocks=1 00:06:51.536 --rc geninfo_unexecuted_blocks=1 00:06:51.536 00:06:51.536 ' 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.536 --rc genhtml_branch_coverage=1 00:06:51.536 --rc genhtml_function_coverage=1 00:06:51.536 --rc genhtml_legend=1 00:06:51.536 --rc geninfo_all_blocks=1 00:06:51.536 --rc geninfo_unexecuted_blocks=1 00:06:51.536 00:06:51.536 ' 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.536 --rc genhtml_branch_coverage=1 00:06:51.536 --rc genhtml_function_coverage=1 00:06:51.536 --rc genhtml_legend=1 00:06:51.536 --rc geninfo_all_blocks=1 00:06:51.536 --rc geninfo_unexecuted_blocks=1 00:06:51.536 00:06:51.536 ' 00:06:51.536 14:12:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.536 14:12:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59839 00:06:51.536 14:12:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.536 14:12:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59839 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59839 ']' 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:51.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.536 14:12:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.795 [2024-12-10 14:12:16.406591] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:06:51.795 [2024-12-10 14:12:16.406917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59839 ] 00:06:51.795 [2024-12-10 14:12:16.588835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.054 [2024-12-10 14:12:16.702281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.991 14:12:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.991 14:12:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.991 14:12:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:52.991 14:12:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59839 00:06:52.991 14:12:17 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59839 ']' 00:06:52.991 14:12:17 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59839 00:06:52.991 14:12:17 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:52.991 14:12:17 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.991 14:12:17 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59839 00:06:53.250 14:12:17 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.250 14:12:17 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.250 killing process with pid 59839 00:06:53.250 14:12:17 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59839' 00:06:53.250 14:12:17 alias_rpc -- common/autotest_common.sh@973 -- # kill 59839 00:06:53.250 14:12:17 alias_rpc -- common/autotest_common.sh@978 -- # wait 59839 00:06:55.864 ************************************ 00:06:55.864 END TEST alias_rpc 00:06:55.864 ************************************ 00:06:55.864 00:06:55.864 real 0m4.176s 00:06:55.864 user 0m4.130s 00:06:55.864 sys 0m0.598s 00:06:55.864 14:12:20 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.864 14:12:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.864 14:12:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:55.864 14:12:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:55.864 14:12:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.864 14:12:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.864 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:55.864 ************************************ 00:06:55.864 START TEST spdkcli_tcp 00:06:55.864 ************************************ 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:55.864 * Looking for test storage... 
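Both teardowns above share one shape: json_config_extra_key sends SIGINT to pid 59733 and then re-checks kill -0 every 0.5 s for up to 30 iterations, while alias_rpc's killprocess resolves the process name first and finishes with a plain kill/wait on 59839. kill -0 delivers no signal at all; it only reports whether the pid still exists. A condensed sketch of the bounded-wait variant (graceful_stop is a hypothetical name for the traced pattern):

    graceful_stop() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null
        for (( i = 0; i < 30; i++ )); do    # 30 * 0.5s = 15s budget, as in the trace
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        return 1                            # still alive; the caller escalates
    }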
00:06:55.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.864 14:12:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.864 --rc genhtml_branch_coverage=1 00:06:55.864 --rc genhtml_function_coverage=1 00:06:55.864 --rc genhtml_legend=1 00:06:55.864 --rc geninfo_all_blocks=1 00:06:55.864 --rc geninfo_unexecuted_blocks=1 00:06:55.864 00:06:55.864 ' 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.864 --rc genhtml_branch_coverage=1 00:06:55.864 --rc genhtml_function_coverage=1 00:06:55.864 --rc genhtml_legend=1 00:06:55.864 --rc geninfo_all_blocks=1 00:06:55.864 --rc geninfo_unexecuted_blocks=1 00:06:55.864 
00:06:55.864 ' 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.864 --rc genhtml_branch_coverage=1 00:06:55.864 --rc genhtml_function_coverage=1 00:06:55.864 --rc genhtml_legend=1 00:06:55.864 --rc geninfo_all_blocks=1 00:06:55.864 --rc geninfo_unexecuted_blocks=1 00:06:55.864 00:06:55.864 ' 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.864 --rc genhtml_branch_coverage=1 00:06:55.864 --rc genhtml_function_coverage=1 00:06:55.864 --rc genhtml_legend=1 00:06:55.864 --rc geninfo_all_blocks=1 00:06:55.864 --rc geninfo_unexecuted_blocks=1 00:06:55.864 00:06:55.864 ' 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59947 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:55.864 14:12:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59947 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59947 ']' 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.864 14:12:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.864 [2024-12-10 14:12:20.678618] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
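Each suite above re-runs the same lcov version probe: lt 1.15 2 calls cmp_versions, which splits both versions on ".", "-" and ":" into arrays and compares them component-wise, padding the shorter with zeros; here 1 < 2 decides at the first component, so return 0 means "lcov predates 2.x" and the pre-2.x --rc lcov_*_coverage option spelling gets exported. A condensed sketch of the same comparison (version_lt is a hypothetical name for this illustration):

    version_lt() {
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # left side is newer
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # left side is older
        done
        return 1                                        # versions are equal
    }
    version_lt 1.15 2 && echo "pre-2.x lcov: use the --rc lcov_*_coverage flags"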
00:06:55.864 [2024-12-10 14:12:20.678900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59947 ] 00:06:56.123 [2024-12-10 14:12:20.860704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.382 [2024-12-10 14:12:20.975921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.382 [2024-12-10 14:12:20.976423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.321 14:12:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.321 14:12:21 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:57.321 14:12:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59969 00:06:57.321 14:12:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:57.321 14:12:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:57.321 [ 00:06:57.321 "bdev_malloc_delete", 00:06:57.321 "bdev_malloc_create", 00:06:57.321 "bdev_null_resize", 00:06:57.321 "bdev_null_delete", 00:06:57.321 "bdev_null_create", 00:06:57.321 "bdev_nvme_cuse_unregister", 00:06:57.321 "bdev_nvme_cuse_register", 00:06:57.321 "bdev_opal_new_user", 00:06:57.321 "bdev_opal_set_lock_state", 00:06:57.321 "bdev_opal_delete", 00:06:57.321 "bdev_opal_get_info", 00:06:57.321 "bdev_opal_create", 00:06:57.321 "bdev_nvme_opal_revert", 00:06:57.321 "bdev_nvme_opal_init", 00:06:57.321 "bdev_nvme_send_cmd", 00:06:57.321 "bdev_nvme_set_keys", 00:06:57.321 "bdev_nvme_get_path_iostat", 00:06:57.321 "bdev_nvme_get_mdns_discovery_info", 00:06:57.321 "bdev_nvme_stop_mdns_discovery", 00:06:57.321 "bdev_nvme_start_mdns_discovery", 00:06:57.321 "bdev_nvme_set_multipath_policy", 00:06:57.321 "bdev_nvme_set_preferred_path", 00:06:57.321 "bdev_nvme_get_io_paths", 00:06:57.321 "bdev_nvme_remove_error_injection", 00:06:57.321 "bdev_nvme_add_error_injection", 00:06:57.321 "bdev_nvme_get_discovery_info", 00:06:57.321 "bdev_nvme_stop_discovery", 00:06:57.321 "bdev_nvme_start_discovery", 00:06:57.321 "bdev_nvme_get_controller_health_info", 00:06:57.321 "bdev_nvme_disable_controller", 00:06:57.321 "bdev_nvme_enable_controller", 00:06:57.321 "bdev_nvme_reset_controller", 00:06:57.321 "bdev_nvme_get_transport_statistics", 00:06:57.321 "bdev_nvme_apply_firmware", 00:06:57.321 "bdev_nvme_detach_controller", 00:06:57.321 "bdev_nvme_get_controllers", 00:06:57.321 "bdev_nvme_attach_controller", 00:06:57.321 "bdev_nvme_set_hotplug", 00:06:57.321 "bdev_nvme_set_options", 00:06:57.321 "bdev_passthru_delete", 00:06:57.321 "bdev_passthru_create", 00:06:57.321 "bdev_lvol_set_parent_bdev", 00:06:57.321 "bdev_lvol_set_parent", 00:06:57.321 "bdev_lvol_check_shallow_copy", 00:06:57.321 "bdev_lvol_start_shallow_copy", 00:06:57.321 "bdev_lvol_grow_lvstore", 00:06:57.321 "bdev_lvol_get_lvols", 00:06:57.321 "bdev_lvol_get_lvstores", 00:06:57.321 "bdev_lvol_delete", 00:06:57.321 "bdev_lvol_set_read_only", 00:06:57.321 "bdev_lvol_resize", 00:06:57.321 "bdev_lvol_decouple_parent", 00:06:57.321 "bdev_lvol_inflate", 00:06:57.321 "bdev_lvol_rename", 00:06:57.321 "bdev_lvol_clone_bdev", 00:06:57.321 "bdev_lvol_clone", 00:06:57.321 "bdev_lvol_snapshot", 00:06:57.321 "bdev_lvol_create", 00:06:57.321 "bdev_lvol_delete_lvstore", 00:06:57.321 "bdev_lvol_rename_lvstore", 00:06:57.321 
"bdev_lvol_create_lvstore", 00:06:57.321 "bdev_raid_set_options", 00:06:57.321 "bdev_raid_remove_base_bdev", 00:06:57.321 "bdev_raid_add_base_bdev", 00:06:57.321 "bdev_raid_delete", 00:06:57.321 "bdev_raid_create", 00:06:57.321 "bdev_raid_get_bdevs", 00:06:57.321 "bdev_error_inject_error", 00:06:57.321 "bdev_error_delete", 00:06:57.321 "bdev_error_create", 00:06:57.321 "bdev_split_delete", 00:06:57.321 "bdev_split_create", 00:06:57.321 "bdev_delay_delete", 00:06:57.321 "bdev_delay_create", 00:06:57.321 "bdev_delay_update_latency", 00:06:57.321 "bdev_zone_block_delete", 00:06:57.321 "bdev_zone_block_create", 00:06:57.321 "blobfs_create", 00:06:57.321 "blobfs_detect", 00:06:57.321 "blobfs_set_cache_size", 00:06:57.321 "bdev_xnvme_delete", 00:06:57.321 "bdev_xnvme_create", 00:06:57.321 "bdev_aio_delete", 00:06:57.321 "bdev_aio_rescan", 00:06:57.321 "bdev_aio_create", 00:06:57.321 "bdev_ftl_set_property", 00:06:57.321 "bdev_ftl_get_properties", 00:06:57.321 "bdev_ftl_get_stats", 00:06:57.321 "bdev_ftl_unmap", 00:06:57.321 "bdev_ftl_unload", 00:06:57.321 "bdev_ftl_delete", 00:06:57.321 "bdev_ftl_load", 00:06:57.321 "bdev_ftl_create", 00:06:57.321 "bdev_virtio_attach_controller", 00:06:57.321 "bdev_virtio_scsi_get_devices", 00:06:57.321 "bdev_virtio_detach_controller", 00:06:57.321 "bdev_virtio_blk_set_hotplug", 00:06:57.321 "bdev_iscsi_delete", 00:06:57.321 "bdev_iscsi_create", 00:06:57.321 "bdev_iscsi_set_options", 00:06:57.321 "accel_error_inject_error", 00:06:57.321 "ioat_scan_accel_module", 00:06:57.321 "dsa_scan_accel_module", 00:06:57.321 "iaa_scan_accel_module", 00:06:57.321 "keyring_file_remove_key", 00:06:57.321 "keyring_file_add_key", 00:06:57.321 "keyring_linux_set_options", 00:06:57.321 "fsdev_aio_delete", 00:06:57.321 "fsdev_aio_create", 00:06:57.321 "iscsi_get_histogram", 00:06:57.321 "iscsi_enable_histogram", 00:06:57.321 "iscsi_set_options", 00:06:57.321 "iscsi_get_auth_groups", 00:06:57.321 "iscsi_auth_group_remove_secret", 00:06:57.321 "iscsi_auth_group_add_secret", 00:06:57.321 "iscsi_delete_auth_group", 00:06:57.321 "iscsi_create_auth_group", 00:06:57.321 "iscsi_set_discovery_auth", 00:06:57.321 "iscsi_get_options", 00:06:57.321 "iscsi_target_node_request_logout", 00:06:57.321 "iscsi_target_node_set_redirect", 00:06:57.321 "iscsi_target_node_set_auth", 00:06:57.321 "iscsi_target_node_add_lun", 00:06:57.321 "iscsi_get_stats", 00:06:57.321 "iscsi_get_connections", 00:06:57.321 "iscsi_portal_group_set_auth", 00:06:57.321 "iscsi_start_portal_group", 00:06:57.321 "iscsi_delete_portal_group", 00:06:57.321 "iscsi_create_portal_group", 00:06:57.321 "iscsi_get_portal_groups", 00:06:57.321 "iscsi_delete_target_node", 00:06:57.321 "iscsi_target_node_remove_pg_ig_maps", 00:06:57.322 "iscsi_target_node_add_pg_ig_maps", 00:06:57.322 "iscsi_create_target_node", 00:06:57.322 "iscsi_get_target_nodes", 00:06:57.322 "iscsi_delete_initiator_group", 00:06:57.322 "iscsi_initiator_group_remove_initiators", 00:06:57.322 "iscsi_initiator_group_add_initiators", 00:06:57.322 "iscsi_create_initiator_group", 00:06:57.322 "iscsi_get_initiator_groups", 00:06:57.322 "nvmf_set_crdt", 00:06:57.322 "nvmf_set_config", 00:06:57.322 "nvmf_set_max_subsystems", 00:06:57.322 "nvmf_stop_mdns_prr", 00:06:57.322 "nvmf_publish_mdns_prr", 00:06:57.322 "nvmf_subsystem_get_listeners", 00:06:57.322 "nvmf_subsystem_get_qpairs", 00:06:57.322 "nvmf_subsystem_get_controllers", 00:06:57.322 "nvmf_get_stats", 00:06:57.322 "nvmf_get_transports", 00:06:57.322 "nvmf_create_transport", 00:06:57.322 "nvmf_get_targets", 00:06:57.322 
"nvmf_delete_target", 00:06:57.322 "nvmf_create_target", 00:06:57.322 "nvmf_subsystem_allow_any_host", 00:06:57.322 "nvmf_subsystem_set_keys", 00:06:57.322 "nvmf_subsystem_remove_host", 00:06:57.322 "nvmf_subsystem_add_host", 00:06:57.322 "nvmf_ns_remove_host", 00:06:57.322 "nvmf_ns_add_host", 00:06:57.322 "nvmf_subsystem_remove_ns", 00:06:57.322 "nvmf_subsystem_set_ns_ana_group", 00:06:57.322 "nvmf_subsystem_add_ns", 00:06:57.322 "nvmf_subsystem_listener_set_ana_state", 00:06:57.322 "nvmf_discovery_get_referrals", 00:06:57.322 "nvmf_discovery_remove_referral", 00:06:57.322 "nvmf_discovery_add_referral", 00:06:57.322 "nvmf_subsystem_remove_listener", 00:06:57.322 "nvmf_subsystem_add_listener", 00:06:57.322 "nvmf_delete_subsystem", 00:06:57.322 "nvmf_create_subsystem", 00:06:57.322 "nvmf_get_subsystems", 00:06:57.322 "env_dpdk_get_mem_stats", 00:06:57.322 "nbd_get_disks", 00:06:57.322 "nbd_stop_disk", 00:06:57.322 "nbd_start_disk", 00:06:57.322 "ublk_recover_disk", 00:06:57.322 "ublk_get_disks", 00:06:57.322 "ublk_stop_disk", 00:06:57.322 "ublk_start_disk", 00:06:57.322 "ublk_destroy_target", 00:06:57.322 "ublk_create_target", 00:06:57.322 "virtio_blk_create_transport", 00:06:57.322 "virtio_blk_get_transports", 00:06:57.322 "vhost_controller_set_coalescing", 00:06:57.322 "vhost_get_controllers", 00:06:57.322 "vhost_delete_controller", 00:06:57.322 "vhost_create_blk_controller", 00:06:57.322 "vhost_scsi_controller_remove_target", 00:06:57.322 "vhost_scsi_controller_add_target", 00:06:57.322 "vhost_start_scsi_controller", 00:06:57.322 "vhost_create_scsi_controller", 00:06:57.322 "thread_set_cpumask", 00:06:57.322 "scheduler_set_options", 00:06:57.322 "framework_get_governor", 00:06:57.322 "framework_get_scheduler", 00:06:57.322 "framework_set_scheduler", 00:06:57.322 "framework_get_reactors", 00:06:57.322 "thread_get_io_channels", 00:06:57.322 "thread_get_pollers", 00:06:57.322 "thread_get_stats", 00:06:57.322 "framework_monitor_context_switch", 00:06:57.322 "spdk_kill_instance", 00:06:57.322 "log_enable_timestamps", 00:06:57.322 "log_get_flags", 00:06:57.322 "log_clear_flag", 00:06:57.322 "log_set_flag", 00:06:57.322 "log_get_level", 00:06:57.322 "log_set_level", 00:06:57.322 "log_get_print_level", 00:06:57.322 "log_set_print_level", 00:06:57.322 "framework_enable_cpumask_locks", 00:06:57.322 "framework_disable_cpumask_locks", 00:06:57.322 "framework_wait_init", 00:06:57.322 "framework_start_init", 00:06:57.322 "scsi_get_devices", 00:06:57.322 "bdev_get_histogram", 00:06:57.322 "bdev_enable_histogram", 00:06:57.322 "bdev_set_qos_limit", 00:06:57.322 "bdev_set_qd_sampling_period", 00:06:57.322 "bdev_get_bdevs", 00:06:57.322 "bdev_reset_iostat", 00:06:57.322 "bdev_get_iostat", 00:06:57.322 "bdev_examine", 00:06:57.322 "bdev_wait_for_examine", 00:06:57.322 "bdev_set_options", 00:06:57.322 "accel_get_stats", 00:06:57.322 "accel_set_options", 00:06:57.322 "accel_set_driver", 00:06:57.322 "accel_crypto_key_destroy", 00:06:57.322 "accel_crypto_keys_get", 00:06:57.322 "accel_crypto_key_create", 00:06:57.322 "accel_assign_opc", 00:06:57.322 "accel_get_module_info", 00:06:57.322 "accel_get_opc_assignments", 00:06:57.322 "vmd_rescan", 00:06:57.322 "vmd_remove_device", 00:06:57.322 "vmd_enable", 00:06:57.322 "sock_get_default_impl", 00:06:57.322 "sock_set_default_impl", 00:06:57.322 "sock_impl_set_options", 00:06:57.322 "sock_impl_get_options", 00:06:57.322 "iobuf_get_stats", 00:06:57.322 "iobuf_set_options", 00:06:57.322 "keyring_get_keys", 00:06:57.322 "framework_get_pci_devices", 00:06:57.322 
"framework_get_config", 00:06:57.322 "framework_get_subsystems", 00:06:57.322 "fsdev_set_opts", 00:06:57.322 "fsdev_get_opts", 00:06:57.322 "trace_get_info", 00:06:57.322 "trace_get_tpoint_group_mask", 00:06:57.322 "trace_disable_tpoint_group", 00:06:57.322 "trace_enable_tpoint_group", 00:06:57.322 "trace_clear_tpoint_mask", 00:06:57.322 "trace_set_tpoint_mask", 00:06:57.322 "notify_get_notifications", 00:06:57.322 "notify_get_types", 00:06:57.322 "spdk_get_version", 00:06:57.322 "rpc_get_methods" 00:06:57.322 ] 00:06:57.322 14:12:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:57.322 14:12:22 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.322 14:12:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.322 14:12:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:57.322 14:12:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59947 00:06:57.322 14:12:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59947 ']' 00:06:57.322 14:12:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59947 00:06:57.322 14:12:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:57.322 14:12:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.322 14:12:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59947 00:06:57.581 killing process with pid 59947 00:06:57.581 14:12:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.581 14:12:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.581 14:12:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59947' 00:06:57.581 14:12:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59947 00:06:57.581 14:12:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59947 00:07:00.115 ************************************ 00:07:00.115 END TEST spdkcli_tcp 00:07:00.115 ************************************ 00:07:00.115 00:07:00.115 real 0m4.260s 00:07:00.115 user 0m7.509s 00:07:00.115 sys 0m0.690s 00:07:00.115 14:12:24 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.115 14:12:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.115 14:12:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:00.115 14:12:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.115 14:12:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.115 14:12:24 -- common/autotest_common.sh@10 -- # set +x 00:07:00.115 ************************************ 00:07:00.115 START TEST dpdk_mem_utility 00:07:00.115 ************************************ 00:07:00.115 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:00.115 * Looking for test storage... 
00:07:00.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:00.115 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.115 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.115 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.115 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:00.115 14:12:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.116 14:12:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.116 14:12:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.116 14:12:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.116 --rc genhtml_branch_coverage=1 00:07:00.116 --rc genhtml_function_coverage=1 00:07:00.116 --rc genhtml_legend=1 00:07:00.116 --rc geninfo_all_blocks=1 00:07:00.116 --rc geninfo_unexecuted_blocks=1 00:07:00.116 00:07:00.116 ' 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.116 --rc 
genhtml_branch_coverage=1 00:07:00.116 --rc genhtml_function_coverage=1 00:07:00.116 --rc genhtml_legend=1 00:07:00.116 --rc geninfo_all_blocks=1 00:07:00.116 --rc geninfo_unexecuted_blocks=1 00:07:00.116 00:07:00.116 ' 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.116 --rc genhtml_branch_coverage=1 00:07:00.116 --rc genhtml_function_coverage=1 00:07:00.116 --rc genhtml_legend=1 00:07:00.116 --rc geninfo_all_blocks=1 00:07:00.116 --rc geninfo_unexecuted_blocks=1 00:07:00.116 00:07:00.116 ' 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.116 --rc genhtml_branch_coverage=1 00:07:00.116 --rc genhtml_function_coverage=1 00:07:00.116 --rc genhtml_legend=1 00:07:00.116 --rc geninfo_all_blocks=1 00:07:00.116 --rc geninfo_unexecuted_blocks=1 00:07:00.116 00:07:00.116 ' 00:07:00.116 14:12:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:00.116 14:12:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60074 00:07:00.116 14:12:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.116 14:12:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60074 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60074 ']' 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.116 14:12:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.374 [2024-12-10 14:12:25.018106] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
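Before issuing any RPCs, every suite arms a cleanup trap: json_config_extra_key trapped ERR onto on_error_exit, alias_rpc trapped ERR onto killprocess, and test_dpdk_mem_info.sh (a few lines below) traps SIGINT SIGTERM EXIT onto killprocess $spdkpid, so the target started here is reaped on every exit path, not just on success. A condensed sketch of the pattern, under the assumption that the target is launched directly as in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdkpid=$!
    trap 'kill "$spdkpid" 2>/dev/null; wait "$spdkpid" 2>/dev/null' SIGINT SIGTERM EXIT
    # ... RPC-driven checks run here; failure or success, the trap fires ...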
00:07:00.374 [2024-12-10 14:12:25.018495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60074 ] 00:07:00.374 [2024-12-10 14:12:25.204068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.632 [2024-12-10 14:12:25.313347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.569 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.569 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:01.569 14:12:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:01.569 14:12:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:01.569 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.569 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:01.569 { 00:07:01.569 "filename": "/tmp/spdk_mem_dump.txt" 00:07:01.569 } 00:07:01.569 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.569 14:12:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:01.569 DPDK memory size 824.000000 MiB in 1 heap(s) 00:07:01.569 1 heaps totaling size 824.000000 MiB 00:07:01.569 size: 824.000000 MiB heap id: 0 00:07:01.569 end heaps---------- 00:07:01.569 9 mempools totaling size 603.782043 MiB 00:07:01.569 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:01.569 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:01.569 size: 100.555481 MiB name: bdev_io_60074 00:07:01.569 size: 50.003479 MiB name: msgpool_60074 00:07:01.569 size: 36.509338 MiB name: fsdev_io_60074 00:07:01.569 size: 21.763794 MiB name: PDU_Pool 00:07:01.569 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:01.569 size: 4.133484 MiB name: evtpool_60074 00:07:01.569 size: 0.026123 MiB name: Session_Pool 00:07:01.569 end mempools------- 00:07:01.569 6 memzones totaling size 4.142822 MiB 00:07:01.569 size: 1.000366 MiB name: RG_ring_0_60074 00:07:01.569 size: 1.000366 MiB name: RG_ring_1_60074 00:07:01.569 size: 1.000366 MiB name: RG_ring_4_60074 00:07:01.569 size: 1.000366 MiB name: RG_ring_5_60074 00:07:01.569 size: 0.125366 MiB name: RG_ring_2_60074 00:07:01.569 size: 0.015991 MiB name: RG_ring_3_60074 00:07:01.569 end memzones------- 00:07:01.569 14:12:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:01.569 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18 00:07:01.569 list of free elements. 
size: 16.781860 MiB 00:07:01.569 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:01.569 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:01.569 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:01.569 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:01.569 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:01.569 element at address: 0x200019a00000 with size: 0.999084 MiB 00:07:01.569 element at address: 0x200032600000 with size: 0.994324 MiB 00:07:01.569 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:01.569 element at address: 0x200019200000 with size: 0.959656 MiB 00:07:01.569 element at address: 0x200019d00040 with size: 0.936401 MiB 00:07:01.569 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:01.569 element at address: 0x20001b400000 with size: 0.563171 MiB 00:07:01.569 element at address: 0x200000c00000 with size: 0.489197 MiB 00:07:01.569 element at address: 0x200019600000 with size: 0.487976 MiB 00:07:01.569 element at address: 0x200019e00000 with size: 0.485413 MiB 00:07:01.569 element at address: 0x200012c00000 with size: 0.433472 MiB 00:07:01.569 element at address: 0x200028800000 with size: 0.390442 MiB 00:07:01.569 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:01.569 list of standard malloc elements. size: 199.287231 MiB 00:07:01.569 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:01.569 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:01.569 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:01.569 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:01.569 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:07:01.569 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:01.569 element at address: 0x200019deff40 with size: 0.062683 MiB 00:07:01.569 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:01.569 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:01.569 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:07:01.569 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:01.569 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:07:01.569 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:01.569 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:01.570 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:01.570 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bff580 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:07:01.570 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200019affc40 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b491fc0 with size: 0.000244 MiB 
00:07:01.570 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:07:01.570 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:07:01.571 element at 
address: 0x20001b4951c0 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:07:01.571 element at address: 0x200028863f40 with size: 0.000244 MiB 00:07:01.571 element at address: 0x200028864040 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886af80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b080 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b180 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b280 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b380 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b480 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b580 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b680 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b780 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b880 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886b980 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886be80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c080 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c180 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c280 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c380 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c480 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c580 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c680 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c780 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c880 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886c980 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d080 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d180 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d280 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d380 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d480 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d580 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d680 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d780 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d880 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886d980 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886da80 
with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886db80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886de80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886df80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e080 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e180 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e280 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e380 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e480 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e580 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e680 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e780 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e880 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886e980 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f080 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f180 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f280 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f380 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f480 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f580 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f680 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f780 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f880 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886f980 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:07:01.571 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:07:01.571 list of memzone associated elements. 
size: 607.930908 MiB 00:07:01.571 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:07:01.571 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:01.571 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:07:01.571 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:01.571 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:07:01.571 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60074_0 00:07:01.571 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:01.571 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60074_0 00:07:01.571 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:01.571 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60074_0 00:07:01.571 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:07:01.571 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:01.571 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:07:01.571 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:01.571 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:01.571 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60074_0 00:07:01.571 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:01.571 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60074 00:07:01.571 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:01.571 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60074 00:07:01.571 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:07:01.571 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:01.571 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:07:01.571 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:01.571 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:01.571 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:01.571 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:07:01.571 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:01.571 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:01.571 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60074 00:07:01.571 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:01.571 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60074 00:07:01.571 element at address: 0x200019affd40 with size: 1.000549 MiB 00:07:01.571 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60074 00:07:01.571 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:07:01.571 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60074 00:07:01.571 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:01.571 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60074 00:07:01.571 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:01.571 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60074 00:07:01.571 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:07:01.571 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:01.571 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:07:01.571 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:01.571 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:07:01.571 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:01.571 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:01.571 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60074 00:07:01.571 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:01.571 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60074 00:07:01.571 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:07:01.571 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:01.571 element at address: 0x200028864140 with size: 0.023804 MiB 00:07:01.571 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:01.571 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:01.571 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60074 00:07:01.571 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:07:01.571 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:01.571 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:01.571 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60074 00:07:01.571 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:01.571 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60074 00:07:01.571 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:01.572 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60074 00:07:01.572 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:07:01.572 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:01.572 14:12:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:01.572 14:12:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60074 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60074 ']' 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60074 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60074 00:07:01.572 killing process with pid 60074 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60074' 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60074 00:07:01.572 14:12:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60074 00:07:04.109 00:07:04.109 real 0m4.082s 00:07:04.109 user 0m3.926s 00:07:04.109 sys 0m0.632s 00:07:04.109 14:12:28 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.109 ************************************ 00:07:04.109 END TEST dpdk_mem_utility 00:07:04.109 ************************************ 00:07:04.109 14:12:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:04.109 14:12:28 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:04.109 14:12:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.109 14:12:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.109 14:12:28 -- common/autotest_common.sh@10 -- # set +x 
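For reference, the memory report above is produced by two pieces visible in the trace: the env_dpdk_get_mem_stats RPC, which makes the running target write its DPDK heap, mempool, and memzone statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which parses that file (first for the summary, then with -m 0 for the per-element view of heap 0). A minimal sketch of the same flow run by hand against a locally started spdk_tgt; paths assume a built SPDK tree as used in this job, and the dump-file location is assumed to be the script's default:

    # Start a target, ask it to dump its DPDK memory stats, then summarize.
    ./build/bin/spdk_tgt &
    tgt_pid=$!
    sleep 2                                  # crude wait for the RPC socket (assumption)
    ./scripts/rpc.py env_dpdk_get_mem_stats  # writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py               # heap/mempool/memzone summary
    ./scripts/dpdk_mem_info.py -m 0          # per-element detail for heap 0
    kill "$tgt_pid"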
00:07:04.109 ************************************ 00:07:04.109 START TEST event 00:07:04.109 ************************************ 00:07:04.109 14:12:28 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:04.369 * Looking for test storage... 00:07:04.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:04.369 14:12:28 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.369 14:12:28 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.369 14:12:28 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.369 14:12:29 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.369 14:12:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.369 14:12:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.369 14:12:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.369 14:12:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.369 14:12:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.369 14:12:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.369 14:12:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.369 14:12:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.369 14:12:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.369 14:12:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.369 14:12:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.369 14:12:29 event -- scripts/common.sh@344 -- # case "$op" in 00:07:04.369 14:12:29 event -- scripts/common.sh@345 -- # : 1 00:07:04.369 14:12:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.369 14:12:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.369 14:12:29 event -- scripts/common.sh@365 -- # decimal 1 00:07:04.369 14:12:29 event -- scripts/common.sh@353 -- # local d=1 00:07:04.369 14:12:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.369 14:12:29 event -- scripts/common.sh@355 -- # echo 1 00:07:04.369 14:12:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.369 14:12:29 event -- scripts/common.sh@366 -- # decimal 2 00:07:04.369 14:12:29 event -- scripts/common.sh@353 -- # local d=2 00:07:04.369 14:12:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.369 14:12:29 event -- scripts/common.sh@355 -- # echo 2 00:07:04.369 14:12:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.369 14:12:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.369 14:12:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.369 14:12:29 event -- scripts/common.sh@368 -- # return 0 00:07:04.369 14:12:29 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.369 14:12:29 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.369 --rc genhtml_branch_coverage=1 00:07:04.369 --rc genhtml_function_coverage=1 00:07:04.369 --rc genhtml_legend=1 00:07:04.369 --rc geninfo_all_blocks=1 00:07:04.369 --rc geninfo_unexecuted_blocks=1 00:07:04.369 00:07:04.369 ' 00:07:04.369 14:12:29 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.369 --rc genhtml_branch_coverage=1 00:07:04.369 --rc genhtml_function_coverage=1 00:07:04.369 --rc genhtml_legend=1 00:07:04.369 --rc 
geninfo_all_blocks=1 00:07:04.369 --rc geninfo_unexecuted_blocks=1 00:07:04.369 00:07:04.369 ' 00:07:04.369 14:12:29 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.369 --rc genhtml_branch_coverage=1 00:07:04.369 --rc genhtml_function_coverage=1 00:07:04.369 --rc genhtml_legend=1 00:07:04.369 --rc geninfo_all_blocks=1 00:07:04.369 --rc geninfo_unexecuted_blocks=1 00:07:04.369 00:07:04.369 ' 00:07:04.369 14:12:29 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.369 --rc genhtml_branch_coverage=1 00:07:04.369 --rc genhtml_function_coverage=1 00:07:04.369 --rc genhtml_legend=1 00:07:04.369 --rc geninfo_all_blocks=1 00:07:04.369 --rc geninfo_unexecuted_blocks=1 00:07:04.369 00:07:04.369 ' 00:07:04.369 14:12:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:04.369 14:12:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:04.369 14:12:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:04.369 14:12:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:04.369 14:12:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.369 14:12:29 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.369 ************************************ 00:07:04.369 START TEST event_perf 00:07:04.369 ************************************ 00:07:04.369 14:12:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:04.369 Running I/O for 1 seconds...[2024-12-10 14:12:29.116855] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:07:04.369 [2024-12-10 14:12:29.117064] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60182 ] 00:07:04.627 [2024-12-10 14:12:29.297864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.627 [2024-12-10 14:12:29.420161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.627 [2024-12-10 14:12:29.420329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.627 [2024-12-10 14:12:29.420446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.627 Running I/O for 1 seconds...[2024-12-10 14:12:29.420478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.005 00:07:06.005 lcore 0: 102741 00:07:06.005 lcore 1: 102740 00:07:06.005 lcore 2: 102741 00:07:06.005 lcore 3: 102741 00:07:06.005 done. 
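The four counters above are the per-lcore event counts from the one-second run: event_perf was started with -m 0xF (reactors on cores 0 through 3) and -t 1, so each reactor dispatched roughly 102.7 thousand events in that second; the wall-clock summary follows below. A sketch of the equivalent manual invocation, using the same binary path and flags as the harness:

    # Run the event-perf app on cores 0-3 for 1 second; it prints one
    # "lcore N: <events>" counter per reactor, as seen above.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1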
00:07:06.005 00:07:06.005 real 0m1.600s 00:07:06.005 user 0m4.330s 00:07:06.005 sys 0m0.127s 00:07:06.005 14:12:30 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.005 14:12:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.005 ************************************ 00:07:06.005 END TEST event_perf 00:07:06.005 ************************************ 00:07:06.005 14:12:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:06.005 14:12:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:06.005 14:12:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.005 14:12:30 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.005 ************************************ 00:07:06.005 START TEST event_reactor 00:07:06.005 ************************************ 00:07:06.005 14:12:30 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:06.005 [2024-12-10 14:12:30.803267] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:07:06.005 [2024-12-10 14:12:30.803646] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60221 ] 00:07:06.290 [2024-12-10 14:12:30.988525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.290 [2024-12-10 14:12:31.100839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.668 test_start 00:07:07.668 oneshot 00:07:07.668 tick 100 00:07:07.668 tick 100 00:07:07.668 tick 250 00:07:07.668 tick 100 00:07:07.668 tick 100 00:07:07.668 tick 100 00:07:07.668 tick 250 00:07:07.668 tick 500 00:07:07.668 tick 100 00:07:07.668 tick 100 00:07:07.668 tick 250 00:07:07.668 tick 100 00:07:07.668 tick 100 00:07:07.668 test_end 00:07:07.668 00:07:07.668 real 0m1.589s 00:07:07.668 user 0m1.373s 00:07:07.668 sys 0m0.106s 00:07:07.668 14:12:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.668 ************************************ 00:07:07.668 END TEST event_reactor 00:07:07.668 ************************************ 00:07:07.668 14:12:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:07.668 14:12:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:07.668 14:12:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:07.668 14:12:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.668 14:12:32 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.668 ************************************ 00:07:07.668 START TEST event_reactor_perf 00:07:07.668 ************************************ 00:07:07.668 14:12:32 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:07.668 [2024-12-10 14:12:32.460200] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:07:07.668 [2024-12-10 14:12:32.460487] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60258 ] 00:07:07.927 [2024-12-10 14:12:32.649535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.187 [2024-12-10 14:12:32.759953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.564 test_start 00:07:09.564 test_end 00:07:09.564 Performance: 381805 events per second 00:07:09.564 00:07:09.564 real 0m1.575s 00:07:09.564 user 0m1.359s 00:07:09.564 sys 0m0.106s 00:07:09.564 ************************************ 00:07:09.564 END TEST event_reactor_perf 00:07:09.564 ************************************ 00:07:09.564 14:12:33 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.564 14:12:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.564 14:12:34 event -- event/event.sh@49 -- # uname -s 00:07:09.564 14:12:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:09.564 14:12:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:09.564 14:12:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.564 14:12:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.564 14:12:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.564 ************************************ 00:07:09.564 START TEST event_scheduler 00:07:09.564 ************************************ 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:09.564 * Looking for test storage... 
00:07:09.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.564 14:12:34 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.564 --rc genhtml_branch_coverage=1 00:07:09.564 --rc genhtml_function_coverage=1 00:07:09.564 --rc genhtml_legend=1 00:07:09.564 --rc geninfo_all_blocks=1 00:07:09.564 --rc geninfo_unexecuted_blocks=1 00:07:09.564 00:07:09.564 ' 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.564 --rc genhtml_branch_coverage=1 00:07:09.564 --rc genhtml_function_coverage=1 00:07:09.564 --rc genhtml_legend=1 00:07:09.564 --rc geninfo_all_blocks=1 00:07:09.564 --rc geninfo_unexecuted_blocks=1 00:07:09.564 00:07:09.564 ' 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.564 --rc genhtml_branch_coverage=1 00:07:09.564 --rc genhtml_function_coverage=1 00:07:09.564 --rc genhtml_legend=1 00:07:09.564 --rc geninfo_all_blocks=1 00:07:09.564 --rc geninfo_unexecuted_blocks=1 00:07:09.564 00:07:09.564 ' 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.564 --rc genhtml_branch_coverage=1 00:07:09.564 --rc genhtml_function_coverage=1 00:07:09.564 --rc genhtml_legend=1 00:07:09.564 --rc geninfo_all_blocks=1 00:07:09.564 --rc geninfo_unexecuted_blocks=1 00:07:09.564 00:07:09.564 ' 00:07:09.564 14:12:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:09.564 14:12:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60334 00:07:09.564 14:12:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:09.564 14:12:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.564 14:12:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60334 00:07:09.564 14:12:34 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60334 ']' 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.564 14:12:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:09.824 [2024-12-10 14:12:34.397167] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:07:09.824 [2024-12-10 14:12:34.397324] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60334 ] 00:07:09.824 [2024-12-10 14:12:34.585432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.083 [2024-12-10 14:12:34.713356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.083 [2024-12-10 14:12:34.713581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.083 [2024-12-10 14:12:34.713775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.083 [2024-12-10 14:12:34.713826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.649 14:12:35 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.649 14:12:35 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:10.649 14:12:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:10.649 14:12:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.649 14:12:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.649 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:10.649 POWER: Cannot set governor of lcore 0 to userspace 00:07:10.649 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:10.649 POWER: Cannot set governor of lcore 0 to performance 00:07:10.649 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:10.649 POWER: Cannot set governor of lcore 0 to userspace 00:07:10.649 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:10.649 POWER: Cannot set governor of lcore 0 to userspace 00:07:10.649 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:10.649 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:10.649 POWER: Unable to set Power Management Environment for lcore 0 00:07:10.649 [2024-12-10 14:12:35.302800] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:10.649 [2024-12-10 14:12:35.302829] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:10.649 [2024-12-10 14:12:35.302841] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:10.649 [2024-12-10 14:12:35.302863] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:10.649 [2024-12-10 14:12:35.302875] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:10.649 [2024-12-10 14:12:35.302888] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:10.649 14:12:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.649 14:12:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:10.650 14:12:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.650 14:12:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.907 [2024-12-10 14:12:35.639645] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:10.907 14:12:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.907 14:12:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:10.907 14:12:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.907 14:12:35 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.907 14:12:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.907 ************************************ 00:07:10.907 START TEST scheduler_create_thread 00:07:10.907 ************************************ 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.907 2 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.907 3 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.907 4 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.907 5 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.907 6 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.907 7 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.907 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.165 8 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.165 9 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.165 10 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.165 14:12:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.537 14:12:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.537 14:12:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:12.537 14:12:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:12.537 14:12:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.537 14:12:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.103 14:12:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.103 14:12:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:13.103 14:12:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.103 14:12:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.036 14:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.036 14:12:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:14.036 14:12:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:14.036 14:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.036 14:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.970 ************************************ 00:07:14.970 END TEST scheduler_create_thread 00:07:14.970 ************************************ 00:07:14.970 14:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.970 00:07:14.970 real 0m3.882s 00:07:14.970 user 0m0.030s 00:07:14.970 sys 0m0.006s 00:07:14.970 14:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.970 14:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.970 14:12:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:14.970 14:12:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60334 00:07:14.970 14:12:39 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60334 ']' 00:07:14.970 14:12:39 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60334 00:07:14.970 14:12:39 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:14.970 14:12:39 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.970 14:12:39 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60334 00:07:14.970 killing process with pid 60334 00:07:14.970 14:12:39 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:14.970 14:12:39 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:14.970 14:12:39 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60334' 00:07:14.970 14:12:39 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60334 00:07:14.970 14:12:39 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60334 00:07:15.229 [2024-12-10 14:12:39.918699] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:16.609 00:07:16.609 real 0m7.027s 00:07:16.609 user 0m15.266s 00:07:16.609 sys 0m0.591s 00:07:16.609 14:12:41 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.609 ************************************ 00:07:16.609 END TEST event_scheduler 00:07:16.609 14:12:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:16.609 ************************************ 00:07:16.609 14:12:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:16.609 14:12:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:16.609 14:12:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.609 14:12:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.609 14:12:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.609 ************************************ 00:07:16.609 START TEST app_repeat 00:07:16.609 ************************************ 00:07:16.609 14:12:41 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60456 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:16.609 Process app_repeat pid: 60456 00:07:16.609 spdk_app_start Round 0 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60456' 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:16.609 14:12:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60456 /var/tmp/spdk-nbd.sock 00:07:16.609 14:12:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60456 ']' 00:07:16.609 14:12:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.609 14:12:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.609 14:12:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:16.609 14:12:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.609 14:12:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.609 [2024-12-10 14:12:41.256606] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
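The scheduler_create_thread test that finishes above drives everything through rpc.py with the scheduler test plugin: idle threads pinned per core, partially active threads, one activity change, one deletion. A condensed sketch of that call sequence, reconstructed from the trace — the default RPC socket and the id-on-stdout behaviour of scheduler_thread_create (suggested by the thread_id=11/12 assignments) are assumptions:

    rootdir=/home/vagrant/spdk_repo/spdk            # path taken from the trace
    rpc_py="$rootdir/scripts/rpc.py --plugin scheduler_plugin"
    # One idle thread pinned to each of the first four cores:
    # -m is the cpumask, -a the active percentage (0 = fully idle).
    for core in 0 1 2 3; do
        mask=$(printf '0x%x' $((1 << core)))
        $rpc_py scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # Unpinned threads with fixed and adjustable load, then one that is
    # created busy only to be deleted again.
    $rpc_py scheduler_thread_create -n one_third_active -a 30
    thread_id=$($rpc_py scheduler_thread_create -n half_active -a 0)
    $rpc_py scheduler_thread_set_active "$thread_id" 50
    thread_id=$($rpc_py scheduler_thread_create -n deleted -a 100)
    $rpc_py scheduler_thread_delete "$thread_id"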
00:07:16.609 [2024-12-10 14:12:41.256904] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60456 ] 00:07:16.868 [2024-12-10 14:12:41.444082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.868 [2024-12-10 14:12:41.555885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.868 [2024-12-10 14:12:41.555915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.437 14:12:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.437 14:12:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:17.437 14:12:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.696 Malloc0 00:07:17.696 14:12:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.955 Malloc1 00:07:17.955 14:12:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.955 14:12:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:18.214 /dev/nbd0 00:07:18.214 14:12:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:18.214 14:12:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:18.214 14:12:42 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:18.214 1+0 records in 00:07:18.214 1+0 records out 00:07:18.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334768 s, 12.2 MB/s 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.214 14:12:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:18.215 14:12:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.215 14:12:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.215 14:12:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:18.215 14:12:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.215 14:12:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.215 14:12:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:18.474 /dev/nbd1 00:07:18.474 14:12:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.474 14:12:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:18.474 1+0 records in 00:07:18.474 1+0 records out 00:07:18.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687059 s, 6.0 MB/s 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.474 14:12:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:18.474 14:12:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.474 14:12:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.474 14:12:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.474 14:12:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.474 
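The waitfornbd checks traced above follow one pattern: poll /proc/partitions until the nbd device registers, then prove it is actually readable with a single direct-I/O block. A minimal sketch under those assumptions — the temp-file path and back-off interval are illustrative, the retry bound and probes match the trace:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                     # back-off interval is an assumption
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # Read one 4 KiB block directly from the device; a zero-byte result
        # means the kernel registered the device but it cannot be read.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

The dd throughput figures in the trace (tens of MB/s for a 4 KiB read) are dominated by per-call latency, so they vary round to round without indicating a problem.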
14:12:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.733 { 00:07:18.733 "nbd_device": "/dev/nbd0", 00:07:18.733 "bdev_name": "Malloc0" 00:07:18.733 }, 00:07:18.733 { 00:07:18.733 "nbd_device": "/dev/nbd1", 00:07:18.733 "bdev_name": "Malloc1" 00:07:18.733 } 00:07:18.733 ]' 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.733 { 00:07:18.733 "nbd_device": "/dev/nbd0", 00:07:18.733 "bdev_name": "Malloc0" 00:07:18.733 }, 00:07:18.733 { 00:07:18.733 "nbd_device": "/dev/nbd1", 00:07:18.733 "bdev_name": "Malloc1" 00:07:18.733 } 00:07:18.733 ]' 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.733 /dev/nbd1' 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.733 /dev/nbd1' 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.733 256+0 records in 00:07:18.733 256+0 records out 00:07:18.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054567 s, 192 MB/s 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.733 256+0 records in 00:07:18.733 256+0 records out 00:07:18.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027239 s, 38.5 MB/s 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.733 14:12:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.994 256+0 records in 00:07:18.994 256+0 records out 00:07:18.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317601 s, 33.0 MB/s 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.994 14:12:43 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.994 14:12:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.253 14:12:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.253 14:12:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.253 14:12:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.253 14:12:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.253 14:12:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.253 14:12:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:19.253 14:12:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.253 14:12:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.253 14:12:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:19.253 14:12:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:19.253 14:12:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:19.253 14:12:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:19.253 14:12:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.253 14:12:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.253 14:12:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:19.253 14:12:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:19.254 14:12:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.254 14:12:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.254 14:12:44 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.254 14:12:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.512 14:12:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.512 14:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.512 14:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.512 14:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.512 14:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.512 14:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.512 14:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:19.512 14:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.512 14:12:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.770 14:12:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.770 14:12:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.770 14:12:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.770 14:12:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:20.028 14:12:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.401 [2024-12-10 14:12:45.899726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.401 [2024-12-10 14:12:46.007845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.401 [2024-12-10 14:12:46.007846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.401 [2024-12-10 14:12:46.202717] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.401 [2024-12-10 14:12:46.202788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:23.302 spdk_app_start Round 1 00:07:23.302 14:12:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:23.302 14:12:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:23.302 14:12:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60456 /var/tmp/spdk-nbd.sock 00:07:23.302 14:12:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60456 ']' 00:07:23.302 14:12:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:23.302 14:12:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.302 14:12:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:23.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
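The write/verify pair traced in the round above is the core of nbd_rpc_data_verify: seed 1 MiB of random data, copy it onto each exported nbd device with direct I/O, then byte-compare each device against the pattern. A condensed sketch — the pattern-file path is illustrative (the real helper keeps nbdrandtest under test/event/):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            # 1 MiB of random data, copied onto every exported device.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # Byte-compare the first 1 MiB of each device against the
            # pattern, then drop the pattern file for the next round.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }

Called as nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write and then again with verify, matching the two passes visible in the trace.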
00:07:23.302 14:12:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.302 14:12:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:23.302 14:12:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.302 14:12:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:23.302 14:12:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.560 Malloc0 00:07:23.560 14:12:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.818 Malloc1 00:07:23.818 14:12:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.818 14:12:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.818 14:12:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.818 14:12:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:23.818 14:12:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.818 14:12:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:23.818 14:12:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.819 14:12:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.819 14:12:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.819 14:12:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.819 14:12:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.819 14:12:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.819 14:12:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:23.819 14:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.819 14:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.819 14:12:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:24.078 /dev/nbd0 00:07:24.078 14:12:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:24.078 14:12:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.078 1+0 records in 00:07:24.078 1+0 records out 
00:07:24.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252368 s, 16.2 MB/s 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.078 14:12:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:24.078 14:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.078 14:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.078 14:12:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:24.336 /dev/nbd1 00:07:24.336 14:12:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:24.336 14:12:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.336 1+0 records in 00:07:24.336 1+0 records out 00:07:24.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451626 s, 9.1 MB/s 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.336 14:12:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:24.336 14:12:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.336 14:12:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.336 14:12:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.336 14:12:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.337 14:12:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.597 { 00:07:24.597 "nbd_device": "/dev/nbd0", 00:07:24.597 "bdev_name": "Malloc0" 00:07:24.597 }, 00:07:24.597 { 00:07:24.597 "nbd_device": "/dev/nbd1", 00:07:24.597 "bdev_name": "Malloc1" 00:07:24.597 } 
00:07:24.597 ]' 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.597 { 00:07:24.597 "nbd_device": "/dev/nbd0", 00:07:24.597 "bdev_name": "Malloc0" 00:07:24.597 }, 00:07:24.597 { 00:07:24.597 "nbd_device": "/dev/nbd1", 00:07:24.597 "bdev_name": "Malloc1" 00:07:24.597 } 00:07:24.597 ]' 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.597 /dev/nbd1' 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.597 /dev/nbd1' 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:24.597 256+0 records in 00:07:24.597 256+0 records out 00:07:24.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134122 s, 78.2 MB/s 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.597 14:12:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.857 256+0 records in 00:07:24.857 256+0 records out 00:07:24.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315765 s, 33.2 MB/s 00:07:24.857 14:12:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.857 14:12:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:24.857 256+0 records in 00:07:24.857 256+0 records out 00:07:24.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342691 s, 30.6 MB/s 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:24.858 14:12:49 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.858 14:12:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.117 14:12:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.375 14:12:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:25.634 14:12:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:25.634 14:12:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:25.893 14:12:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:27.270 [2024-12-10 14:12:51.863956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.270 [2024-12-10 14:12:51.976209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.270 [2024-12-10 14:12:51.976230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.529 [2024-12-10 14:12:52.169014] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:27.529 [2024-12-10 14:12:52.169100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:28.908 14:12:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:28.908 spdk_app_start Round 2 00:07:28.908 14:12:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:28.908 14:12:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60456 /var/tmp/spdk-nbd.sock 00:07:28.908 14:12:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60456 ']' 00:07:28.908 14:12:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:28.908 14:12:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:28.908 14:12:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
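The empty-list check just traced is nbd_get_count confirming teardown: once both devices are stopped, nbd_get_disks must return [] and the count must drop to 0. A sketch of that helper as the trace implies it — the rpc.py path and socket come from the trace, and the trailing true mirrors grep -c printing 0 while exiting non-zero on no matches:

    nbd_get_count() {
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local rpc_server=/var/tmp/spdk-nbd.sock
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits 1 when nothing matches, hence "true".
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }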
00:07:28.908 14:12:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.908 14:12:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:29.168 14:12:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.168 14:12:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:29.168 14:12:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.427 Malloc0 00:07:29.427 14:12:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.686 Malloc1 00:07:29.686 14:12:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.686 14:12:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:29.946 /dev/nbd0 00:07:29.946 14:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:29.946 14:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.946 1+0 records in 00:07:29.946 1+0 records out 
00:07:29.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363749 s, 11.3 MB/s 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:29.946 14:12:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:29.946 14:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.946 14:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.946 14:12:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:30.204 /dev/nbd1 00:07:30.204 14:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:30.204 14:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:30.204 1+0 records in 00:07:30.204 1+0 records out 00:07:30.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353585 s, 11.6 MB/s 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:30.204 14:12:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:30.204 14:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.204 14:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.204 14:12:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.204 14:12:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.204 14:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.465 14:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:30.466 { 00:07:30.466 "nbd_device": "/dev/nbd0", 00:07:30.466 "bdev_name": "Malloc0" 00:07:30.466 }, 00:07:30.466 { 00:07:30.466 "nbd_device": "/dev/nbd1", 00:07:30.466 "bdev_name": "Malloc1" 00:07:30.466 } 
00:07:30.466 ]' 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:30.466 { 00:07:30.466 "nbd_device": "/dev/nbd0", 00:07:30.466 "bdev_name": "Malloc0" 00:07:30.466 }, 00:07:30.466 { 00:07:30.466 "nbd_device": "/dev/nbd1", 00:07:30.466 "bdev_name": "Malloc1" 00:07:30.466 } 00:07:30.466 ]' 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:30.466 /dev/nbd1' 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:30.466 /dev/nbd1' 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:30.466 256+0 records in 00:07:30.466 256+0 records out 00:07:30.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138292 s, 75.8 MB/s 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.466 14:12:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:30.728 256+0 records in 00:07:30.728 256+0 records out 00:07:30.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298619 s, 35.1 MB/s 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:30.728 256+0 records in 00:07:30.728 256+0 records out 00:07:30.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305239 s, 34.4 MB/s 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:30.728 14:12:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.728 14:12:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.987 14:12:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.246 14:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.246 14:12:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:31.506 14:12:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:31.506 14:12:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:31.765 14:12:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:33.145 [2024-12-10 14:12:57.676954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.145 [2024-12-10 14:12:57.786252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.145 [2024-12-10 14:12:57.786252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.404 [2024-12-10 14:12:57.982339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:33.404 [2024-12-10 14:12:57.982415] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:34.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:34.814 14:12:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60456 /var/tmp/spdk-nbd.sock 00:07:34.814 14:12:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60456 ']' 00:07:34.814 14:12:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:34.814 14:12:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.814 14:12:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
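Rounds 0 through 2 above all repeat the same cycle from event.sh. A condensed sketch of that loop, with the launch flags and helper names taken from the trace; helper bodies are sketched earlier, waitforlisten's probe is the autotest helper's (not reproduced here), and the malloc bdev arguments are size in MB and block size in bytes:

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_py="$rootdir/scripts/rpc.py"
    sock=/var/tmp/spdk-nbd.sock
    # One long-lived app_repeat process hosts all rounds; it restarts the
    # SPDK app internally after each spdk_kill_instance.
    "$rootdir/test/event/app_repeat/app_repeat" -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$sock"
        "$rpc_py" -s "$sock" bdev_malloc_create 64 4096     # -> Malloc0
        "$rpc_py" -s "$sock" bdev_malloc_create 64 4096     # -> Malloc1
        nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        # Tear the instance down; app_repeat brings up the next round.
        "$rpc_py" -s "$sock" spdk_kill_instance SIGTERM
        sleep 3
    done
    killprocess "$repeat_pid"

The final killprocess and the Round 3 shutdown notices that follow below close out this loop.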
00:07:34.814 14:12:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.814 14:12:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:35.073 14:12:59 event.app_repeat -- event/event.sh@39 -- # killprocess 60456 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60456 ']' 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60456 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60456 00:07:35.073 killing process with pid 60456 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60456' 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60456 00:07:35.073 14:12:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60456 00:07:36.008 spdk_app_start is called in Round 0. 00:07:36.008 Shutdown signal received, stop current app iteration 00:07:36.008 Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 reinitialization... 00:07:36.008 spdk_app_start is called in Round 1. 00:07:36.008 Shutdown signal received, stop current app iteration 00:07:36.008 Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 reinitialization... 00:07:36.008 spdk_app_start is called in Round 2. 00:07:36.008 Shutdown signal received, stop current app iteration 00:07:36.008 Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 reinitialization... 00:07:36.008 spdk_app_start is called in Round 3. 00:07:36.008 Shutdown signal received, stop current app iteration 00:07:36.267 14:13:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:36.267 14:13:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:36.267 00:07:36.267 real 0m19.668s 00:07:36.267 user 0m41.754s 00:07:36.267 sys 0m3.357s 00:07:36.267 14:13:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.267 14:13:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.267 ************************************ 00:07:36.267 END TEST app_repeat 00:07:36.267 ************************************ 00:07:36.267 14:13:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:36.267 14:13:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:36.267 14:13:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.267 14:13:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.267 14:13:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.267 ************************************ 00:07:36.267 START TEST cpu_locks 00:07:36.267 ************************************ 00:07:36.267 14:13:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:36.267 * Looking for test storage... 
00:07:36.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:36.267 14:13:01 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:36.267 14:13:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:36.267 14:13:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:36.526 14:13:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.526 14:13:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:36.526 14:13:01 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.526 14:13:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:36.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.526 --rc genhtml_branch_coverage=1 00:07:36.526 --rc genhtml_function_coverage=1 00:07:36.526 --rc genhtml_legend=1 00:07:36.526 --rc geninfo_all_blocks=1 00:07:36.526 --rc geninfo_unexecuted_blocks=1 00:07:36.526 00:07:36.526 ' 00:07:36.526 14:13:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:36.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.526 --rc genhtml_branch_coverage=1 00:07:36.526 --rc genhtml_function_coverage=1 
00:07:36.526 --rc genhtml_legend=1 00:07:36.526 --rc geninfo_all_blocks=1 00:07:36.526 --rc geninfo_unexecuted_blocks=1 00:07:36.526 00:07:36.526 ' 00:07:36.526 14:13:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:36.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.526 --rc genhtml_branch_coverage=1 00:07:36.526 --rc genhtml_function_coverage=1 00:07:36.526 --rc genhtml_legend=1 00:07:36.526 --rc geninfo_all_blocks=1 00:07:36.526 --rc geninfo_unexecuted_blocks=1 00:07:36.526 00:07:36.526 ' 00:07:36.526 14:13:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:36.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.526 --rc genhtml_branch_coverage=1 00:07:36.526 --rc genhtml_function_coverage=1 00:07:36.526 --rc genhtml_legend=1 00:07:36.526 --rc geninfo_all_blocks=1 00:07:36.526 --rc geninfo_unexecuted_blocks=1 00:07:36.526 00:07:36.526 ' 00:07:36.526 14:13:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:36.526 14:13:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:36.526 14:13:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:36.526 14:13:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:36.526 14:13:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.526 14:13:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.526 14:13:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.526 ************************************ 00:07:36.526 START TEST default_locks 00:07:36.526 ************************************ 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60904 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60904 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60904 ']' 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.526 14:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.526 [2024-12-10 14:13:01.273630] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:07:36.526 [2024-12-10 14:13:01.273762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60904 ] 00:07:36.785 [2024-12-10 14:13:01.455888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.785 [2024-12-10 14:13:01.572465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.720 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.720 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:37.720 14:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60904 00:07:37.720 14:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60904 00:07:37.720 14:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60904 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60904 ']' 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60904 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60904 00:07:38.287 killing process with pid 60904 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60904' 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60904 00:07:38.287 14:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60904 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60904 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60904 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:40.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
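Note: the trace above runs waitforlisten against pid 60904 after that process has already been killed, wrapped in the harness's NOT helper, so the failure reported next is the expected outcome of the test. A rough sketch of that inversion idiom, reconstructed only from the calls visible in this trace (valid_exec_arg, the es bookkeeping); the exact body in autotest_common.sh may differ:

    # NOT: succeed only if the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        # es > 128 would indicate death by signal; either way a
        # non-zero exit from the wrapped command is the pass case here
        (( es != 0 ))
    }
    NOT waitforlisten 60904 /var/tmp/spdk.sock   # passes because the wait fails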
00:07:40.821 ERROR: process (pid: 60904) is no longer running 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60904 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60904 ']' 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.821 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.821 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60904) - No such process 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:40.822 00:07:40.822 real 0m4.213s 00:07:40.822 user 0m4.183s 00:07:40.822 sys 0m0.704s 00:07:40.822 ************************************ 00:07:40.822 END TEST default_locks 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.822 14:13:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.822 ************************************ 00:07:40.822 14:13:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:40.822 14:13:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.822 14:13:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.822 14:13:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.822 ************************************ 00:07:40.822 START TEST default_locks_via_rpc 00:07:40.822 ************************************ 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60981 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 60981 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60981 ']' 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.822 14:13:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.822 [2024-12-10 14:13:05.560139] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:07:40.822 [2024-12-10 14:13:05.560434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60981 ] 00:07:41.081 [2024-12-10 14:13:05.743611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.081 [2024-12-10 14:13:05.855324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60981 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60981 00:07:42.018 14:13:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.585 14:13:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60981 00:07:42.585 14:13:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60981 ']' 00:07:42.585 14:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60981 00:07:42.586 14:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:42.586 14:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.586 14:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60981 00:07:42.586 killing process with pid 60981 00:07:42.586 14:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.586 14:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.586 14:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60981' 00:07:42.586 14:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60981 00:07:42.586 14:13:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60981 00:07:45.119 ************************************ 00:07:45.119 END TEST default_locks_via_rpc 00:07:45.119 ************************************ 00:07:45.119 00:07:45.119 real 0m4.245s 00:07:45.119 user 0m4.227s 00:07:45.119 sys 0m0.697s 00:07:45.119 14:13:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.119 14:13:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.119 14:13:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:45.119 14:13:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.119 14:13:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.119 14:13:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.119 ************************************ 00:07:45.119 START TEST non_locking_app_on_locked_coremask 00:07:45.119 ************************************ 00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61061 00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61061 /var/tmp/spdk.sock 00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61061 ']' 00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.119 14:13:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.119 [2024-12-10 14:13:09.886024] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:07:45.119 [2024-12-10 14:13:09.886378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61061 ] 00:07:45.378 [2024-12-10 14:13:10.069155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.378 [2024-12-10 14:13:10.186972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61078 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61078 /var/tmp/spdk2.sock 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61078 ']' 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.313 14:13:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.572 [2024-12-10 14:13:11.149025] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:07:46.572 [2024-12-10 14:13:11.149600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61078 ] 00:07:46.572 [2024-12-10 14:13:11.334312] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
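Note: the "CPU core locks deactivated" NOTICE above is emitted because the second target was launched with --disable-cpumask-locks, which is why it can start on the core the first target already holds. The two launches as they appear in this trace, paths abbreviated:

    # first instance claims core 0 (mask 0x1) and holds /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1
    # second instance shares core 0 by skipping the lock claim
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock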
00:07:46.572 [2024-12-10 14:13:11.334361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.831 [2024-12-10 14:13:11.568111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.366 14:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.366 14:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:49.366 14:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61061 00:07:49.366 14:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61061 00:07:49.366 14:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61061 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61061 ']' 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61061 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61061 00:07:49.932 killing process with pid 61061 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61061' 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61061 00:07:49.932 14:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61061 00:07:55.199 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61078 00:07:55.199 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61078 ']' 00:07:55.200 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61078 00:07:55.200 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:55.200 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.200 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61078 00:07:55.200 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.200 killing process with pid 61078 00:07:55.200 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.200 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61078' 00:07:55.200 14:13:19 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61078 00:07:55.200 14:13:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61078 00:07:57.106 00:07:57.106 real 0m12.056s 00:07:57.106 user 0m12.356s 00:07:57.106 sys 0m1.495s 00:07:57.106 14:13:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.106 ************************************ 00:07:57.106 END TEST non_locking_app_on_locked_coremask 00:07:57.106 ************************************ 00:07:57.106 14:13:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.106 14:13:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:57.106 14:13:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.106 14:13:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.106 14:13:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.106 ************************************ 00:07:57.106 START TEST locking_app_on_unlocked_coremask 00:07:57.106 ************************************ 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61234 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61234 /var/tmp/spdk.sock 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61234 ']' 00:07:57.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.106 14:13:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.365 [2024-12-10 14:13:22.022612] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:07:57.365 [2024-12-10 14:13:22.022767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61234 ] 00:07:57.624 [2024-12-10 14:13:22.203994] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:57.624 [2024-12-10 14:13:22.204044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.624 [2024-12-10 14:13:22.310612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.561 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61250 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61250 /var/tmp/spdk2.sock 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61250 ']' 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:58.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.562 14:13:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.562 [2024-12-10 14:13:23.235023] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:07:58.562 [2024-12-10 14:13:23.235364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61250 ] 00:07:58.821 [2024-12-10 14:13:23.418070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.821 [2024-12-10 14:13:23.645445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.358 14:13:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.358 14:13:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:01.358 14:13:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61250 00:08:01.358 14:13:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61250 00:08:01.358 14:13:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61234 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61234 ']' 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61234 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61234 00:08:01.927 killing process with pid 61234 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61234' 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61234 00:08:01.927 14:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61234 00:08:07.200 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61250 00:08:07.201 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61250 ']' 00:08:07.201 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61250 00:08:07.201 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:07.201 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.201 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61250 00:08:07.201 killing process with pid 61250 00:08:07.201 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.201 14:13:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.201 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61250' 00:08:07.201 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61250 00:08:07.201 14:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61250 00:08:09.106 00:08:09.106 real 0m11.920s 00:08:09.106 user 0m12.179s 00:08:09.106 sys 0m1.452s 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.106 ************************************ 00:08:09.106 END TEST locking_app_on_unlocked_coremask 00:08:09.106 ************************************ 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.106 14:13:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:09.106 14:13:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.106 14:13:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.106 14:13:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.106 ************************************ 00:08:09.106 START TEST locking_app_on_locked_coremask 00:08:09.106 ************************************ 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61407 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61407 /var/tmp/spdk.sock 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61407 ']' 00:08:09.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.106 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.107 14:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.365 [2024-12-10 14:13:34.031148] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:08:09.365 [2024-12-10 14:13:34.031264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61407 ] 00:08:09.624 [2024-12-10 14:13:34.211905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.624 [2024-12-10 14:13:34.325207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61424 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61424 /var/tmp/spdk2.sock 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61424 /var/tmp/spdk2.sock 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61424 /var/tmp/spdk2.sock 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61424 ']' 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.559 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.559 [2024-12-10 14:13:35.283814] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:08:10.559 [2024-12-10 14:13:35.283942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61424 ] 00:08:10.817 [2024-12-10 14:13:35.465188] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61407 has claimed it. 00:08:10.817 [2024-12-10 14:13:35.465255] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:11.385 ERROR: process (pid: 61424) is no longer running 00:08:11.385 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61424) - No such process 00:08:11.385 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.385 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:11.385 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:11.385 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.386 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.386 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.386 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61407 00:08:11.386 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61407 00:08:11.386 14:13:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61407 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61407 ']' 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61407 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61407 00:08:11.645 killing process with pid 61407 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61407' 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61407 00:08:11.645 14:13:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61407 00:08:14.184 00:08:14.184 real 0m4.933s 00:08:14.184 user 0m5.112s 00:08:14.184 sys 0m0.918s 00:08:14.184 14:13:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.184 14:13:38 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:08:14.184 ************************************ 00:08:14.184 END TEST locking_app_on_locked_coremask 00:08:14.184 ************************************ 00:08:14.184 14:13:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:14.184 14:13:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.184 14:13:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.185 14:13:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.185 ************************************ 00:08:14.185 START TEST locking_overlapped_coremask 00:08:14.185 ************************************ 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61492 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61492 /var/tmp/spdk.sock 00:08:14.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61492 ']' 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.185 14:13:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:14.444 [2024-12-10 14:13:39.039515] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:08:14.444 [2024-12-10 14:13:39.039648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61492 ] 00:08:14.444 [2024-12-10 14:13:39.219892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.703 [2024-12-10 14:13:39.338333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.703 [2024-12-10 14:13:39.338396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.703 [2024-12-10 14:13:39.338429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61516 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61516 /var/tmp/spdk2.sock 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61516 /var/tmp/spdk2.sock 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61516 /var/tmp/spdk2.sock 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61516 ']' 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:15.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.640 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:15.640 [2024-12-10 14:13:40.337206] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:08:15.640 [2024-12-10 14:13:40.337326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61516 ] 00:08:15.900 [2024-12-10 14:13:40.522889] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61492 has claimed it. 00:08:15.900 [2024-12-10 14:13:40.522945] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:16.159 ERROR: process (pid: 61516) is no longer running 00:08:16.159 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61516) - No such process 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61492 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61492 ']' 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61492 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.159 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61492 00:08:16.418 killing process with pid 61492 00:08:16.418 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.418 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.418 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61492' 00:08:16.418 14:13:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61492 00:08:16.418 14:13:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61492 00:08:18.977 00:08:18.977 real 0m4.484s 00:08:18.977 user 0m12.141s 00:08:18.977 sys 0m0.626s 00:08:18.977 ************************************ 00:08:18.977 END TEST locking_overlapped_coremask 00:08:18.977 ************************************ 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.977 14:13:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:18.977 14:13:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.977 14:13:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.977 14:13:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.977 ************************************ 00:08:18.977 START TEST locking_overlapped_coremask_via_rpc 00:08:18.977 ************************************ 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61580 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61580 /var/tmp/spdk.sock 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61580 ']' 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.977 14:13:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.977 [2024-12-10 14:13:43.600098] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:08:18.977 [2024-12-10 14:13:43.600402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61580 ] 00:08:18.977 [2024-12-10 14:13:43.781510] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:18.977 [2024-12-10 14:13:43.781697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.237 [2024-12-10 14:13:43.895534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.237 [2024-12-10 14:13:43.895572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.237 [2024-12-10 14:13:43.895580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61604 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61604 /var/tmp/spdk2.sock 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61604 ']' 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:20.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.174 14:13:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.174 [2024-12-10 14:13:44.877120] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:08:20.174 [2024-12-10 14:13:44.877487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61604 ] 00:08:20.434 [2024-12-10 14:13:45.063144] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:20.434 [2024-12-10 14:13:45.063213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.694 [2024-12-10 14:13:45.372142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.694 [2024-12-10 14:13:45.372283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.694 [2024-12-10 14:13:45.372315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:23.226 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.227 [2024-12-10 14:13:47.494896] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61580 has claimed it. 
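The failure above is the point of the locking_overlapped_coremask_via_rpc test: the first target (pid 61580, mask 0x7) holds cores 0-2, the second (pid 61604, mask 0x1c) spans cores 2-4, and both start with --disable-cpumask-locks. Enabling the locks over RPC succeeds on the first target but must fail on the second because core 2 is already claimed, which produces the JSON-RPC exchange dumped next. A hedged manual reproduction, with sockets and masks taken from the spdk_tgt invocations above:

# First target (0x7) claims cores 0-2; the second (0x1c) then collides on core 2.
scripts/rpc.py framework_enable_cpumask_locks                         # default socket /var/tmp/spdk.sock, succeeds
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: -32603 "Failed to claim CPU core: 2"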
00:08:23.227 request: 00:08:23.227 { 00:08:23.227 "method": "framework_enable_cpumask_locks", 00:08:23.227 "req_id": 1 00:08:23.227 } 00:08:23.227 Got JSON-RPC error response 00:08:23.227 response: 00:08:23.227 { 00:08:23.227 "code": -32603, 00:08:23.227 "message": "Failed to claim CPU core: 2" 00:08:23.227 } 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61580 /var/tmp/spdk.sock 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61580 ']' 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61604 /var/tmp/spdk2.sock 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61604 ']' 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:23.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
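The rpc_cmd failure above is swallowed by autotest_common.sh's NOT wrapper, whose bookkeeping (the @652 through @679 lines in the trace) captures the exit status, rules out signal deaths above 128, and passes only when the status is non-zero. A minimal sketch of that inversion, simplified from the helper seen in the trace:

# Simplified NOT: succeed only if the wrapped command fails.
# The real helper also honors an expected-error pattern and the >128 signal range.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks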
00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.227 14:13:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:23.227 14:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:23.227 ************************************ 00:08:23.227 END TEST locking_overlapped_coremask_via_rpc 00:08:23.227 ************************************ 00:08:23.227 14:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:23.227 14:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:23.227 14:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:23.227 00:08:23.227 real 0m4.510s 00:08:23.227 user 0m1.375s 00:08:23.227 sys 0m0.244s 00:08:23.227 14:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.227 14:13:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.485 14:13:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:23.485 14:13:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61580 ]] 00:08:23.485 14:13:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61580 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61580 ']' 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61580 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61580 00:08:23.485 killing process with pid 61580 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61580' 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61580 00:08:23.485 14:13:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61580 00:08:26.016 14:13:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61604 ]] 00:08:26.016 14:13:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61604 00:08:26.016 14:13:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61604 ']' 00:08:26.016 14:13:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61604 00:08:26.016 14:13:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:26.016 14:13:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.016 
14:13:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61604 00:08:26.016 killing process with pid 61604 00:08:26.016 14:13:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:26.016 14:13:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:26.016 14:13:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61604' 00:08:26.016 14:13:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61604 00:08:26.016 14:13:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61604 00:08:29.299 14:13:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:29.299 14:13:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:29.299 14:13:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61580 ]] 00:08:29.299 14:13:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61580 00:08:29.299 Process with pid 61580 is not found 00:08:29.299 14:13:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61580 ']' 00:08:29.299 14:13:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61580 00:08:29.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61580) - No such process 00:08:29.299 14:13:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61580 is not found' 00:08:29.299 14:13:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61604 ]] 00:08:29.299 14:13:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61604 00:08:29.299 14:13:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61604 ']' 00:08:29.299 14:13:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61604 00:08:29.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61604) - No such process 00:08:29.299 Process with pid 61604 is not found 00:08:29.299 14:13:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61604 is not found' 00:08:29.299 14:13:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:29.299 ************************************ 00:08:29.299 END TEST cpu_locks 00:08:29.299 ************************************ 00:08:29.299 00:08:29.299 real 0m52.586s 00:08:29.299 user 1m29.851s 00:08:29.299 sys 0m7.646s 00:08:29.299 14:13:53 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.299 14:13:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.299 00:08:29.299 real 1m24.746s 00:08:29.299 user 2m34.201s 00:08:29.299 sys 0m12.360s 00:08:29.299 14:13:53 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.299 14:13:53 event -- common/autotest_common.sh@10 -- # set +x 00:08:29.299 ************************************ 00:08:29.299 END TEST event 00:08:29.299 ************************************ 00:08:29.299 14:13:53 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:29.299 14:13:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.299 14:13:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.299 14:13:53 -- common/autotest_common.sh@10 -- # set +x 00:08:29.299 ************************************ 00:08:29.299 START TEST thread 00:08:29.299 ************************************ 00:08:29.299 14:13:53 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:29.299 * Looking for test storage... 
00:08:29.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:29.299 14:13:53 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:29.299 14:13:53 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:29.299 14:13:53 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:29.299 14:13:53 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:29.300 14:13:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.300 14:13:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.300 14:13:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.300 14:13:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.300 14:13:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.300 14:13:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.300 14:13:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.300 14:13:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.300 14:13:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.300 14:13:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.300 14:13:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.300 14:13:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:29.300 14:13:53 thread -- scripts/common.sh@345 -- # : 1 00:08:29.300 14:13:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.300 14:13:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.300 14:13:53 thread -- scripts/common.sh@365 -- # decimal 1 00:08:29.300 14:13:53 thread -- scripts/common.sh@353 -- # local d=1 00:08:29.300 14:13:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.300 14:13:53 thread -- scripts/common.sh@355 -- # echo 1 00:08:29.300 14:13:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.300 14:13:53 thread -- scripts/common.sh@366 -- # decimal 2 00:08:29.300 14:13:53 thread -- scripts/common.sh@353 -- # local d=2 00:08:29.300 14:13:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.300 14:13:53 thread -- scripts/common.sh@355 -- # echo 2 00:08:29.300 14:13:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.300 14:13:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.300 14:13:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.300 14:13:53 thread -- scripts/common.sh@368 -- # return 0 00:08:29.300 14:13:53 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.300 14:13:53 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.300 --rc genhtml_branch_coverage=1 00:08:29.300 --rc genhtml_function_coverage=1 00:08:29.300 --rc genhtml_legend=1 00:08:29.300 --rc geninfo_all_blocks=1 00:08:29.300 --rc geninfo_unexecuted_blocks=1 00:08:29.300 00:08:29.300 ' 00:08:29.300 14:13:53 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.300 --rc genhtml_branch_coverage=1 00:08:29.300 --rc genhtml_function_coverage=1 00:08:29.300 --rc genhtml_legend=1 00:08:29.300 --rc geninfo_all_blocks=1 00:08:29.300 --rc geninfo_unexecuted_blocks=1 00:08:29.300 00:08:29.300 ' 00:08:29.300 14:13:53 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:29.300 --rc genhtml_branch_coverage=1 00:08:29.300 --rc genhtml_function_coverage=1 00:08:29.300 --rc genhtml_legend=1 00:08:29.300 --rc geninfo_all_blocks=1 00:08:29.300 --rc geninfo_unexecuted_blocks=1 00:08:29.300 00:08:29.300 ' 00:08:29.300 14:13:53 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.300 --rc genhtml_branch_coverage=1 00:08:29.300 --rc genhtml_function_coverage=1 00:08:29.300 --rc genhtml_legend=1 00:08:29.300 --rc geninfo_all_blocks=1 00:08:29.300 --rc geninfo_unexecuted_blocks=1 00:08:29.300 00:08:29.300 ' 00:08:29.300 14:13:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:29.300 14:13:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:29.300 14:13:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.300 14:13:53 thread -- common/autotest_common.sh@10 -- # set +x 00:08:29.300 ************************************ 00:08:29.300 START TEST thread_poller_perf 00:08:29.300 ************************************ 00:08:29.300 14:13:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:29.300 [2024-12-10 14:13:53.944053] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:08:29.300 [2024-12-10 14:13:53.945107] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61806 ] 00:08:29.559 [2024-12-10 14:13:54.151477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.559 [2024-12-10 14:13:54.293002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.559 Running 1000 pollers for 1 seconds with 1 microseconds period. 
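The result block that follows reports the busy TSC cycles, the total number of poller runs, and the per-run cost derived from them. The printed figures are consistent with plain integer division; this is a reconstruction from the log's own numbers, not poller_perf's source:

# poller_cost for the first run, recomputed from the table below
busy=2501086754      # busy TSC cycles over the 1 s run
runs=388000          # total_run_count
tsc_hz=2490000000    # 2.49 GHz timestamp counter
echo "cyc:  $(( busy / runs ))"                        # 6446
echo "nsec: $(( busy / runs * 1000000000 / tsc_hz ))"  # 2588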
00:08:30.972 [2024-12-10T14:13:55.806Z] ====================================== 00:08:30.972 [2024-12-10T14:13:55.806Z] busy:2501086754 (cyc) 00:08:30.972 [2024-12-10T14:13:55.806Z] total_run_count: 388000 00:08:30.972 [2024-12-10T14:13:55.806Z] tsc_hz: 2490000000 (cyc) 00:08:30.972 [2024-12-10T14:13:55.806Z] ====================================== 00:08:30.972 [2024-12-10T14:13:55.806Z] poller_cost: 6446 (cyc), 2588 (nsec) 00:08:30.972 00:08:30.972 real 0m1.661s 00:08:30.972 user 0m1.409s 00:08:30.972 sys 0m0.143s 00:08:30.972 ************************************ 00:08:30.972 END TEST thread_poller_perf 00:08:30.972 ************************************ 00:08:30.972 14:13:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.972 14:13:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:30.972 14:13:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:30.972 14:13:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:30.972 14:13:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.973 14:13:55 thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.973 ************************************ 00:08:30.973 START TEST thread_poller_perf 00:08:30.973 ************************************ 00:08:30.973 14:13:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:30.973 [2024-12-10 14:13:55.685381] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:08:30.973 [2024-12-10 14:13:55.685749] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61848 ] 00:08:31.232 [2024-12-10 14:13:55.868978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.232 Running 1000 pollers for 1 seconds with 0 microseconds period. 
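The second run, whose results follow, sets the poller period to zero, turning the 1000 timed pollers into busy pollers. Against the first table the per-run cost drops from 6446 to 527 cycles and the run count climbs from 388k to 4.73M, presumably because the zero-period path skips the timer-expiration bookkeeping. The flags, as used in the two invocations above:

# -b: poller count, -l: poller period in microseconds, -t: run time in seconds
poller_perf -b 1000 -l 1 -t 1   # timed pollers:  388k runs, 6446 cyc each
poller_perf -b 1000 -l 0 -t 1   # busy pollers:  4.73M runs,  527 cyc each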
00:08:31.232 [2024-12-10 14:13:56.015550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.610 [2024-12-10T14:13:57.444Z] ====================================== 00:08:32.610 [2024-12-10T14:13:57.444Z] busy:2495229772 (cyc) 00:08:32.610 [2024-12-10T14:13:57.444Z] total_run_count: 4730000 00:08:32.610 [2024-12-10T14:13:57.444Z] tsc_hz: 2490000000 (cyc) 00:08:32.610 [2024-12-10T14:13:57.444Z] ====================================== 00:08:32.610 [2024-12-10T14:13:57.444Z] poller_cost: 527 (cyc), 211 (nsec) 00:08:32.610 00:08:32.610 real 0m1.634s 00:08:32.610 user 0m1.410s 00:08:32.610 sys 0m0.115s 00:08:32.610 14:13:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.610 14:13:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:32.610 ************************************ 00:08:32.610 END TEST thread_poller_perf 00:08:32.610 ************************************ 00:08:32.610 14:13:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:32.610 ************************************ 00:08:32.610 END TEST thread 00:08:32.610 ************************************ 00:08:32.610 00:08:32.610 real 0m3.689s 00:08:32.610 user 0m2.998s 00:08:32.610 sys 0m0.473s 00:08:32.610 14:13:57 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.610 14:13:57 thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.610 14:13:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:32.610 14:13:57 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:32.610 14:13:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.610 14:13:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.610 14:13:57 -- common/autotest_common.sh@10 -- # set +x 00:08:32.610 ************************************ 00:08:32.610 START TEST app_cmdline 00:08:32.610 ************************************ 00:08:32.610 14:13:57 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:32.869 * Looking for test storage... 
00:08:32.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.869 14:13:57 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:32.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.869 --rc genhtml_branch_coverage=1 00:08:32.869 --rc genhtml_function_coverage=1 00:08:32.869 --rc genhtml_legend=1 00:08:32.869 --rc geninfo_all_blocks=1 00:08:32.869 --rc geninfo_unexecuted_blocks=1 00:08:32.869 00:08:32.869 ' 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:32.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.869 --rc genhtml_branch_coverage=1 00:08:32.869 --rc genhtml_function_coverage=1 00:08:32.869 --rc genhtml_legend=1 00:08:32.869 --rc geninfo_all_blocks=1 00:08:32.869 --rc geninfo_unexecuted_blocks=1 00:08:32.869 
00:08:32.869 ' 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:32.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.869 --rc genhtml_branch_coverage=1 00:08:32.869 --rc genhtml_function_coverage=1 00:08:32.869 --rc genhtml_legend=1 00:08:32.869 --rc geninfo_all_blocks=1 00:08:32.869 --rc geninfo_unexecuted_blocks=1 00:08:32.869 00:08:32.869 ' 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:32.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.869 --rc genhtml_branch_coverage=1 00:08:32.869 --rc genhtml_function_coverage=1 00:08:32.869 --rc genhtml_legend=1 00:08:32.869 --rc geninfo_all_blocks=1 00:08:32.869 --rc geninfo_unexecuted_blocks=1 00:08:32.869 00:08:32.869 ' 00:08:32.869 14:13:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:32.869 14:13:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61937 00:08:32.869 14:13:57 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:32.869 14:13:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61937 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61937 ']' 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.869 14:13:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:33.128 [2024-12-10 14:13:57.803739] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
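cmdline.sh@16 above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so the target coming up below exposes exactly those two RPCs. The checks that follow sort the advertised methods against that pair and then confirm that anything else, here env_dpdk_get_mem_stats, is rejected with -32601, as the request/response dump further down shows. A hedged condensation of the probe sequence, with paths from the trace:

# Allowlisted target: only two methods are callable.
build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
scripts/rpc.py rpc_get_methods           # lists rpc_get_methods and spdk_get_version only
scripts/rpc.py spdk_get_version          # allowed: returns the version object dumped below
scripts/rpc.py env_dpdk_get_mem_stats    # blocked: -32601 "Method not found"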
00:08:33.128 [2024-12-10 14:13:57.804003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61937 ] 00:08:33.388 [2024-12-10 14:13:57.986517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.388 [2024-12-10 14:13:58.115211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.325 14:13:59 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.325 14:13:59 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:34.325 14:13:59 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:34.584 { 00:08:34.584 "version": "SPDK v25.01-pre git sha1 4cd130da1", 00:08:34.584 "fields": { 00:08:34.584 "major": 25, 00:08:34.584 "minor": 1, 00:08:34.584 "patch": 0, 00:08:34.584 "suffix": "-pre", 00:08:34.584 "commit": "4cd130da1" 00:08:34.584 } 00:08:34.584 } 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:34.584 14:13:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:34.584 14:13:59 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.843 request: 00:08:34.843 { 00:08:34.843 "method": "env_dpdk_get_mem_stats", 00:08:34.843 "req_id": 1 00:08:34.843 } 00:08:34.843 Got JSON-RPC error response 00:08:34.843 response: 00:08:34.843 { 00:08:34.843 "code": -32601, 00:08:34.843 "message": "Method not found" 00:08:34.843 } 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.843 14:13:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61937 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61937 ']' 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61937 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61937 00:08:34.843 killing process with pid 61937 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61937' 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@973 -- # kill 61937 00:08:34.843 14:13:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 61937 00:08:38.152 00:08:38.152 real 0m4.874s 00:08:38.152 user 0m4.870s 00:08:38.152 sys 0m0.866s 00:08:38.152 14:14:02 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.152 ************************************ 00:08:38.152 END TEST app_cmdline 00:08:38.153 ************************************ 00:08:38.153 14:14:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.153 14:14:02 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:38.153 14:14:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.153 14:14:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.153 14:14:02 -- common/autotest_common.sh@10 -- # set +x 00:08:38.153 ************************************ 00:08:38.153 START TEST version 00:08:38.153 ************************************ 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:38.153 * Looking for test storage... 
00:08:38.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:38.153 14:14:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.153 14:14:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.153 14:14:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.153 14:14:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.153 14:14:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.153 14:14:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.153 14:14:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.153 14:14:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.153 14:14:02 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.153 14:14:02 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.153 14:14:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.153 14:14:02 version -- scripts/common.sh@344 -- # case "$op" in 00:08:38.153 14:14:02 version -- scripts/common.sh@345 -- # : 1 00:08:38.153 14:14:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.153 14:14:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.153 14:14:02 version -- scripts/common.sh@365 -- # decimal 1 00:08:38.153 14:14:02 version -- scripts/common.sh@353 -- # local d=1 00:08:38.153 14:14:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.153 14:14:02 version -- scripts/common.sh@355 -- # echo 1 00:08:38.153 14:14:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.153 14:14:02 version -- scripts/common.sh@366 -- # decimal 2 00:08:38.153 14:14:02 version -- scripts/common.sh@353 -- # local d=2 00:08:38.153 14:14:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.153 14:14:02 version -- scripts/common.sh@355 -- # echo 2 00:08:38.153 14:14:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.153 14:14:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.153 14:14:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.153 14:14:02 version -- scripts/common.sh@368 -- # return 0 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.153 --rc genhtml_branch_coverage=1 00:08:38.153 --rc genhtml_function_coverage=1 00:08:38.153 --rc genhtml_legend=1 00:08:38.153 --rc geninfo_all_blocks=1 00:08:38.153 --rc geninfo_unexecuted_blocks=1 00:08:38.153 00:08:38.153 ' 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.153 --rc genhtml_branch_coverage=1 00:08:38.153 --rc genhtml_function_coverage=1 00:08:38.153 --rc genhtml_legend=1 00:08:38.153 --rc geninfo_all_blocks=1 00:08:38.153 --rc geninfo_unexecuted_blocks=1 00:08:38.153 00:08:38.153 ' 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:38.153 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:38.153 --rc genhtml_branch_coverage=1 00:08:38.153 --rc genhtml_function_coverage=1 00:08:38.153 --rc genhtml_legend=1 00:08:38.153 --rc geninfo_all_blocks=1 00:08:38.153 --rc geninfo_unexecuted_blocks=1 00:08:38.153 00:08:38.153 ' 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.153 --rc genhtml_branch_coverage=1 00:08:38.153 --rc genhtml_function_coverage=1 00:08:38.153 --rc genhtml_legend=1 00:08:38.153 --rc geninfo_all_blocks=1 00:08:38.153 --rc geninfo_unexecuted_blocks=1 00:08:38.153 00:08:38.153 ' 00:08:38.153 14:14:02 version -- app/version.sh@17 -- # get_header_version major 00:08:38.153 14:14:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.153 14:14:02 version -- app/version.sh@14 -- # cut -f2 00:08:38.153 14:14:02 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.153 14:14:02 version -- app/version.sh@17 -- # major=25 00:08:38.153 14:14:02 version -- app/version.sh@18 -- # get_header_version minor 00:08:38.153 14:14:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.153 14:14:02 version -- app/version.sh@14 -- # cut -f2 00:08:38.153 14:14:02 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.153 14:14:02 version -- app/version.sh@18 -- # minor=1 00:08:38.153 14:14:02 version -- app/version.sh@19 -- # get_header_version patch 00:08:38.153 14:14:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.153 14:14:02 version -- app/version.sh@14 -- # cut -f2 00:08:38.153 14:14:02 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.153 14:14:02 version -- app/version.sh@19 -- # patch=0 00:08:38.153 14:14:02 version -- app/version.sh@20 -- # get_header_version suffix 00:08:38.153 14:14:02 version -- app/version.sh@14 -- # cut -f2 00:08:38.153 14:14:02 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.153 14:14:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.153 14:14:02 version -- app/version.sh@20 -- # suffix=-pre 00:08:38.153 14:14:02 version -- app/version.sh@22 -- # version=25.1 00:08:38.153 14:14:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:38.153 14:14:02 version -- app/version.sh@28 -- # version=25.1rc0 00:08:38.153 14:14:02 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:38.153 14:14:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:38.153 14:14:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:38.153 14:14:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:38.153 ************************************ 00:08:38.153 END TEST version 00:08:38.153 ************************************ 00:08:38.153 00:08:38.153 real 0m0.344s 00:08:38.153 user 0m0.202s 00:08:38.153 sys 0m0.199s 00:08:38.153 14:14:02 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.153 14:14:02 version -- common/autotest_common.sh@10 -- # set +x 00:08:38.153 14:14:02 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:38.153 14:14:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:38.153 14:14:02 -- spdk/autotest.sh@194 -- # uname -s 00:08:38.153 14:14:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:38.153 14:14:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:38.153 14:14:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:38.153 14:14:02 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:38.153 14:14:02 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:38.153 14:14:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.153 14:14:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.153 14:14:02 -- common/autotest_common.sh@10 -- # set +x 00:08:38.153 ************************************ 00:08:38.153 START TEST blockdev_nvme 00:08:38.153 ************************************ 00:08:38.153 14:14:02 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:38.153 * Looking for test storage... 00:08:38.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:38.153 14:14:02 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:38.153 14:14:02 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:08:38.153 14:14:02 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:38.412 14:14:03 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:38.412 14:14:03 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.412 14:14:03 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.412 14:14:03 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.413 14:14:03 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:38.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.413 --rc genhtml_branch_coverage=1 00:08:38.413 --rc genhtml_function_coverage=1 00:08:38.413 --rc genhtml_legend=1 00:08:38.413 --rc geninfo_all_blocks=1 00:08:38.413 --rc geninfo_unexecuted_blocks=1 00:08:38.413 00:08:38.413 ' 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:38.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.413 --rc genhtml_branch_coverage=1 00:08:38.413 --rc genhtml_function_coverage=1 00:08:38.413 --rc genhtml_legend=1 00:08:38.413 --rc geninfo_all_blocks=1 00:08:38.413 --rc geninfo_unexecuted_blocks=1 00:08:38.413 00:08:38.413 ' 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:38.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.413 --rc genhtml_branch_coverage=1 00:08:38.413 --rc genhtml_function_coverage=1 00:08:38.413 --rc genhtml_legend=1 00:08:38.413 --rc geninfo_all_blocks=1 00:08:38.413 --rc geninfo_unexecuted_blocks=1 00:08:38.413 00:08:38.413 ' 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:38.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.413 --rc genhtml_branch_coverage=1 00:08:38.413 --rc genhtml_function_coverage=1 00:08:38.413 --rc genhtml_legend=1 00:08:38.413 --rc geninfo_all_blocks=1 00:08:38.413 --rc geninfo_unexecuted_blocks=1 00:08:38.413 00:08:38.413 ' 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:38.413 14:14:03 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62131 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 62131 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 62131 ']' 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.413 14:14:03 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.413 14:14:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.413 [2024-12-10 14:14:03.165129] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
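setup_nvme_conf, traced below at blockdev.sh@81 through @83, has gen_nvme.sh enumerate the local PCIe controllers and hands the result to the target as a "bdev" subsystem config, attaching Nvme0 through Nvme3 at 0000:00:10.0 through 0000:00:13.0. A compressed sketch of that step, reconstructed from the trace rather than quoted from blockdev.sh:

# Generate one bdev_nvme_attach_controller entry per controller, load it over RPC.
json=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)
rpc_cmd load_subsystem_config -j "$json"
# Each entry has the shape:
#   { "method": "bdev_nvme_attach_controller",
#     "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }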
00:08:38.413 [2024-12-10 14:14:03.165594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62131 ] 00:08:38.673 [2024-12-10 14:14:03.348209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.673 [2024-12-10 14:14:03.485967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.050 14:14:04 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.050 14:14:04 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:08:40.050 14:14:04 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:40.050 14:14:04 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:08:40.050 14:14:04 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:40.050 14:14:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:40.050 14:14:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:40.050 14:14:04 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:40.050 14:14:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.050 14:14:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.310 14:14:04 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.310 14:14:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:08:40.310 14:14:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.310 14:14:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.310 14:14:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.310 14:14:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.310 14:14:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.310 14:14:05 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:40.310 14:14:05 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:40.310 14:14:05 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:40.310 14:14:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.310 14:14:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.310 14:14:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.310 14:14:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:40.310 14:14:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:40.311 14:14:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "15382fa9-2ef0-41f5-8b4c-86c5b9581ca2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "15382fa9-2ef0-41f5-8b4c-86c5b9581ca2",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "4554ac25-024e-41b0-b84c-763e2a24101f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4554ac25-024e-41b0-b84c-763e2a24101f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "eaa6ae07-5493-416c-810f-7078eb0f6c2f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eaa6ae07-5493-416c-810f-7078eb0f6c2f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bfc836b3-969b-4fad-9691-d26468487dd3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bfc836b3-969b-4fad-9691-d26468487dd3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a9c6f1d0-bbb1-4fef-bf95-eb3b4f54a411"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "a9c6f1d0-bbb1-4fef-bf95-eb3b4f54a411",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f60af3be-6eaf-4aae-a8d0-50da94826c45"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f60af3be-6eaf-4aae-a8d0-50da94826c45",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:40.570 14:14:05 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:40.570 14:14:05 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:40.570 14:14:05 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:40.570 14:14:05 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 62131 00:08:40.570 14:14:05 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 62131 ']' 00:08:40.570 14:14:05 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 62131 00:08:40.570 14:14:05 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:08:40.570 14:14:05 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.570 14:14:05 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62131 00:08:40.570 killing process with pid 62131 00:08:40.570 14:14:05 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.570 14:14:05 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.570 14:14:05 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62131' 00:08:40.570 14:14:05 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 62131 00:08:40.570 14:14:05 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 62131 00:08:43.107 14:14:07 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:43.107 14:14:07 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:43.107 14:14:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:43.107 14:14:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.107 14:14:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.107 ************************************ 00:08:43.107 START TEST bdev_hello_world 00:08:43.107 ************************************ 00:08:43.107 14:14:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:43.107 [2024-12-10 14:14:07.872408] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:08:43.107 [2024-12-10 14:14:07.872534] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62226 ] 00:08:43.366 [2024-12-10 14:14:08.059170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.366 [2024-12-10 14:14:08.194222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.304 [2024-12-10 14:14:08.903347] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:44.304 [2024-12-10 14:14:08.903404] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:44.304 [2024-12-10 14:14:08.903428] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:44.304 [2024-12-10 14:14:08.906585] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:44.304 [2024-12-10 14:14:08.907254] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:44.304 [2024-12-10 14:14:08.907291] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:44.304 [2024-12-10 14:14:08.907516] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
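hello_bdev has just written its "Hello World!" buffer to the start of Nvme0n1 and read the same string back, which is the whole point of the example. The run is reproducible by hand with the exact command from the trace (same repo layout assumed; -b selects which bdev from the JSON config to open):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1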
00:08:44.304 00:08:44.304 [2024-12-10 14:14:08.907542] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:45.684 ************************************ 00:08:45.684 END TEST bdev_hello_world 00:08:45.684 ************************************ 00:08:45.684 00:08:45.684 real 0m2.342s 00:08:45.684 user 0m1.912s 00:08:45.684 sys 0m0.323s 00:08:45.684 14:14:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.684 14:14:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:45.684 14:14:10 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:45.684 14:14:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.684 14:14:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.684 14:14:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.684 ************************************ 00:08:45.684 START TEST bdev_bounds 00:08:45.684 ************************************ 00:08:45.684 Process bdevio pid: 62274 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62274 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62274' 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62274 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62274 ']' 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.684 14:14:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:45.684 [2024-12-10 14:14:10.300654] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
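bdevio is being launched here in wait mode: -w keeps the app idle after init (-s 0 matching the PRE_RESERVED_MEM=0 from the prologue), and the CUnit suites only run once tests.py issues perform_tests over the RPC socket, as the next lines show. A sketch of that two-step flow, assuming the default /var/tmp/spdk.sock and backgrounding bdevio by hand rather than via the bdev_bounds helper:

    # Step 1: start bdevio idle; the trace additionally pins it to core mask 0x7.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # Step 2: once the RPC socket answers, kick off every registered suite.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests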
00:08:45.684 [2024-12-10 14:14:10.301075] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62274 ] 00:08:45.684 [2024-12-10 14:14:10.491101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.943 [2024-12-10 14:14:10.632114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.943 [2024-12-10 14:14:10.632262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.943 [2024-12-10 14:14:10.632291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.881 14:14:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.881 14:14:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:46.882 14:14:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:46.882 I/O targets: 00:08:46.882 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:46.882 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:46.882 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.882 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.882 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.882 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:46.882 00:08:46.882 00:08:46.882 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.882 http://cunit.sourceforge.net/ 00:08:46.882 00:08:46.882 00:08:46.882 Suite: bdevio tests on: Nvme3n1 00:08:46.882 Test: blockdev write read block ...passed 00:08:46.882 Test: blockdev write zeroes read block ...passed 00:08:46.882 Test: blockdev write zeroes read no split ...passed 00:08:46.882 Test: blockdev write zeroes read split ...passed 00:08:46.882 Test: blockdev write zeroes read split partial ...passed 00:08:46.882 Test: blockdev reset ...[2024-12-10 14:14:11.544927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:46.882 passed 00:08:46.882 Test: blockdev write read 8 blocks ...[2024-12-10 14:14:11.549257] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:08:46.882 passed 00:08:46.882 Test: blockdev write read size > 128k ...passed 00:08:46.882 Test: blockdev write read invalid size ...passed 00:08:46.882 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.882 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.882 Test: blockdev write read max offset ...passed 00:08:46.882 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.882 Test: blockdev writev readv 8 blocks ...passed 00:08:46.882 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.882 Test: blockdev writev readv block ...passed 00:08:46.882 Test: blockdev writev readv size > 128k ...passed 00:08:46.882 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.882 Test: blockdev comparev and writev ...[2024-12-10 14:14:11.559271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5e0a000 len:0x1000 00:08:46.882 [2024-12-10 14:14:11.559333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.882 passed 00:08:46.882 Test: blockdev nvme passthru rw ...passed 00:08:46.882 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.882 Test: blockdev nvme admin passthru ...[2024-12-10 14:14:11.560380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.882 [2024-12-10 14:14:11.560423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.882 passed 00:08:46.882 Test: blockdev copy ...passed 00:08:46.882 Suite: bdevio tests on: Nvme2n3 00:08:46.882 Test: blockdev write read block ...passed 00:08:46.882 Test: blockdev write zeroes read block ...passed 00:08:46.882 Test: blockdev write zeroes read no split ...passed 00:08:46.882 Test: blockdev write zeroes read split ...passed 00:08:46.882 Test: blockdev write zeroes read split partial ...passed 00:08:46.882 Test: blockdev reset ...[2024-12-10 14:14:11.637631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:46.882 [2024-12-10 14:14:11.642181] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:46.882 passed 00:08:46.882 Test: blockdev write read 8 blocks ...
00:08:46.882 passed 00:08:46.882 Test: blockdev write read size > 128k ...passed 00:08:46.882 Test: blockdev write read invalid size ...passed 00:08:46.882 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.882 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.882 Test: blockdev write read max offset ...passed 00:08:46.882 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.882 Test: blockdev writev readv 8 blocks ...passed 00:08:46.882 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.882 Test: blockdev writev readv block ...passed 00:08:46.882 Test: blockdev writev readv size > 128k ...passed 00:08:46.882 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.882 Test: blockdev comparev and writev ...[2024-12-10 14:14:11.653087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x298806000 len:0x1000 00:08:46.882 [2024-12-10 14:14:11.653133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.882 passed 00:08:46.882 Test: blockdev nvme passthru rw ...passed 00:08:46.882 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.882 Test: blockdev nvme admin passthru ...[2024-12-10 14:14:11.654052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.882 [2024-12-10 14:14:11.654091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.882 passed 00:08:46.882 Test: blockdev copy ...passed 00:08:46.882 Suite: bdevio tests on: Nvme2n2 00:08:46.882 Test: blockdev write read block ...passed 00:08:46.882 Test: blockdev write zeroes read block ...passed 00:08:46.882 Test: blockdev write zeroes read no split ...passed 00:08:46.882 Test: blockdev write zeroes read split ...passed 00:08:47.142 Test: blockdev write zeroes read split partial ...passed 00:08:47.142 Test: blockdev reset ...[2024-12-10 14:14:11.732115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:47.142 passed 00:08:47.142 Test: blockdev write read 8 blocks ...[2024-12-10 14:14:11.736366] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:47.142 passed 00:08:47.142 Test: blockdev write read size > 128k ...passed 00:08:47.142 Test: blockdev write read invalid size ...passed 00:08:47.142 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.142 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.142 Test: blockdev write read max offset ...passed 00:08:47.142 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.142 Test: blockdev writev readv 8 blocks ...passed 00:08:47.142 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.142 Test: blockdev writev readv block ...passed 00:08:47.142 Test: blockdev writev readv size > 128k ...passed 00:08:47.142 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.142 Test: blockdev comparev and writev ...[2024-12-10 14:14:11.745811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5e3c000 len:0x1000 00:08:47.142 [2024-12-10 14:14:11.745866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.142 passed 00:08:47.142 Test: blockdev nvme passthru rw ...passed 00:08:47.142 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.142 Test: blockdev nvme admin passthru ...[2024-12-10 14:14:11.746847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.142 [2024-12-10 14:14:11.746885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.142 passed 00:08:47.142 Test: blockdev copy ...passed 00:08:47.142 Suite: bdevio tests on: Nvme2n1 00:08:47.142 Test: blockdev write read block ...passed 00:08:47.142 Test: blockdev write zeroes read block ...passed 00:08:47.142 Test: blockdev write zeroes read no split ...passed 00:08:47.142 Test: blockdev write zeroes read split ...passed 00:08:47.142 Test: blockdev write zeroes read split partial ...passed 00:08:47.142 Test: blockdev reset ...[2024-12-10 14:14:11.825898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:47.142 [2024-12-10 14:14:11.830096] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:47.142 passed 00:08:47.142 Test: blockdev write read 8 blocks ...
00:08:47.142 passed 00:08:47.142 Test: blockdev write read size > 128k ...passed 00:08:47.142 Test: blockdev write read invalid size ...passed 00:08:47.142 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.142 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.142 Test: blockdev write read max offset ...passed 00:08:47.142 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.142 Test: blockdev writev readv 8 blocks ...passed 00:08:47.142 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.142 Test: blockdev writev readv block ...passed 00:08:47.142 Test: blockdev writev readv size > 128k ...passed 00:08:47.142 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.142 Test: blockdev comparev and writev ...[2024-12-10 14:14:11.841246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5e38000 len:0x1000 00:08:47.142 [2024-12-10 14:14:11.841421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.142 passed 00:08:47.142 Test: blockdev nvme passthru rw ...passed 00:08:47.142 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:14:11.842787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.142 passed 00:08:47.142 Test: blockdev nvme admin passthru ...[2024-12-10 14:14:11.842949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.142 passed 00:08:47.142 Test: blockdev copy ...passed 00:08:47.142 Suite: bdevio tests on: Nvme1n1 00:08:47.142 Test: blockdev write read block ...passed 00:08:47.142 Test: blockdev write zeroes read block ...passed 00:08:47.142 Test: blockdev write zeroes read no split ...passed 00:08:47.142 Test: blockdev write zeroes read split ...passed 00:08:47.142 Test: blockdev write zeroes read split partial ...passed 00:08:47.142 Test: blockdev reset ...[2024-12-10 14:14:11.919844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:47.142 passed 00:08:47.142 Test: blockdev write read 8 blocks ...[2024-12-10 14:14:11.923729] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:47.142 passed 00:08:47.142 Test: blockdev write read size > 128k ...passed 00:08:47.142 Test: blockdev write read invalid size ...passed 00:08:47.142 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.142 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.142 Test: blockdev write read max offset ...passed 00:08:47.142 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.142 Test: blockdev writev readv 8 blocks ...passed 00:08:47.142 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.142 Test: blockdev writev readv block ...passed 00:08:47.142 Test: blockdev writev readv size > 128k ...passed 00:08:47.142 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.142 Test: blockdev comparev and writev ...[2024-12-10 14:14:11.933920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5e34000 len:0x1000 00:08:47.142 [2024-12-10 14:14:11.933967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.142 passed 00:08:47.142 Test: blockdev nvme passthru rw ...passed 00:08:47.143 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:14:11.934886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.143 [2024-12-10 14:14:11.934923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.143 passed 00:08:47.143 Test: blockdev nvme admin passthru ...passed 00:08:47.143 Test: blockdev copy ...passed 00:08:47.143 Suite: bdevio tests on: Nvme0n1 00:08:47.143 Test: blockdev write read block ...passed 00:08:47.143 Test: blockdev write zeroes read block ...passed 00:08:47.143 Test: blockdev write zeroes read no split ...passed 00:08:47.402 Test: blockdev write zeroes read split ...passed 00:08:47.402 Test: blockdev write zeroes read split partial ...passed 00:08:47.402 Test: blockdev reset ...[2024-12-10 14:14:12.011739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:47.402 passed 00:08:47.402 Test: blockdev write read 8 blocks ...[2024-12-10 14:14:12.015463] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:47.402 passed 00:08:47.402 Test: blockdev write read size > 128k ...passed 00:08:47.402 Test: blockdev write read invalid size ...passed 00:08:47.402 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.402 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.402 Test: blockdev write read max offset ...passed 00:08:47.402 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.402 Test: blockdev writev readv 8 blocks ...passed 00:08:47.402 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.402 Test: blockdev writev readv block ...passed 00:08:47.402 Test: blockdev writev readv size > 128k ...passed 00:08:47.402 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.402 Test: blockdev comparev and writev ...passed 00:08:47.402 Test: blockdev nvme passthru rw ...[2024-12-10 14:14:12.024150] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:47.402 separate metadata which is not supported yet. 
00:08:47.402 passed 00:08:47.402 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.402 Test: blockdev nvme admin passthru ...[2024-12-10 14:14:12.024796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:47.402 [2024-12-10 14:14:12.024844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:47.402 passed 00:08:47.402 Test: blockdev copy ...passed 00:08:47.402 00:08:47.402 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.402 suites 6 6 n/a 0 0 00:08:47.402 tests 138 138 138 0 0 00:08:47.402 asserts 893 893 893 0 n/a 00:08:47.402 00:08:47.402 Elapsed time = 1.503 seconds 00:08:47.402 0 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62274 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62274 ']' 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62274 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62274 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62274' 00:08:47.402 killing process with pid 62274 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62274 00:08:47.402 14:14:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62274 00:08:48.340 14:14:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:48.340 00:08:48.340 real 0m2.962s 00:08:48.340 user 0m7.424s 00:08:48.340 sys 0m0.502s 00:08:48.340 14:14:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.340 14:14:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:48.340 ************************************ 00:08:48.340 END TEST bdev_bounds 00:08:48.340 ************************************ 00:08:48.600 14:14:13 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:48.600 14:14:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:48.600 14:14:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.600 14:14:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.600 ************************************ 00:08:48.600 START TEST bdev_nbd 00:08:48.600 ************************************ 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:48.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62339 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62339 /var/tmp/spdk-nbd.sock 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62339 ']' 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.600 14:14:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:48.600 [2024-12-10 14:14:13.356408] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
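nbd_function_test runs against a bare bdev_svc app whose only job is to answer RPCs on the dedicated /var/tmp/spdk-nbd.sock; the nbd_start_disk, nbd_get_disks, and nbd_stop_disk calls below then export each bdev as a kernel /dev/nbdX node and tear it down again. The core RPC round trip, runnable by hand against that socket (device and bdev names as in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0   # export the bdev as an NBD device
    "$rpc" -s "$sock" nbd_get_disks                      # JSON list of active exports
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0            # detach the kernel device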
00:08:48.600 [2024-12-10 14:14:13.356737] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.860 [2024-12-10 14:14:13.545126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.860 [2024-12-10 14:14:13.660863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.797 1+0 records in 
00:08:49.797 1+0 records out 00:08:49.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616845 s, 6.6 MB/s 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.797 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.056 1+0 records in 00:08:50.056 1+0 records out 00:08:50.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000933564 s, 4.4 MB/s 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:50.056 14:14:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.316 1+0 records in 00:08:50.316 1+0 records out 00:08:50.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611502 s, 6.7 MB/s 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:50.316 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.575 1+0 records in 00:08:50.575 1+0 records out 00:08:50.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798916 s, 5.1 MB/s 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.575 14:14:15 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:50.575 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.834 1+0 records in 00:08:50.834 1+0 records out 00:08:50.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715565 s, 5.7 MB/s 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.834 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:50.835 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.094 1+0 records in 00:08:51.094 1+0 records out 00:08:51.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000994204 s, 4.1 MB/s 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:51.094 14:14:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.353 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:51.353 { 00:08:51.353 "nbd_device": "/dev/nbd0", 00:08:51.353 "bdev_name": "Nvme0n1" 00:08:51.353 }, 00:08:51.353 { 00:08:51.353 "nbd_device": "/dev/nbd1", 00:08:51.353 "bdev_name": "Nvme1n1" 00:08:51.353 }, 00:08:51.353 { 00:08:51.353 "nbd_device": "/dev/nbd2", 00:08:51.353 "bdev_name": "Nvme2n1" 00:08:51.353 }, 00:08:51.353 { 00:08:51.353 "nbd_device": "/dev/nbd3", 00:08:51.353 "bdev_name": "Nvme2n2" 00:08:51.353 }, 00:08:51.353 { 00:08:51.353 "nbd_device": "/dev/nbd4", 00:08:51.353 "bdev_name": "Nvme2n3" 00:08:51.353 }, 00:08:51.354 { 00:08:51.354 "nbd_device": "/dev/nbd5", 00:08:51.354 "bdev_name": "Nvme3n1" 00:08:51.354 } 00:08:51.354 ]' 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:51.354 { 00:08:51.354 "nbd_device": "/dev/nbd0", 00:08:51.354 "bdev_name": "Nvme0n1" 00:08:51.354 }, 00:08:51.354 { 00:08:51.354 "nbd_device": "/dev/nbd1", 00:08:51.354 "bdev_name": "Nvme1n1" 00:08:51.354 }, 00:08:51.354 { 00:08:51.354 "nbd_device": "/dev/nbd2", 00:08:51.354 "bdev_name": "Nvme2n1" 00:08:51.354 }, 00:08:51.354 { 00:08:51.354 "nbd_device": "/dev/nbd3", 00:08:51.354 "bdev_name": "Nvme2n2" 00:08:51.354 }, 00:08:51.354 { 00:08:51.354 "nbd_device": "/dev/nbd4", 00:08:51.354 "bdev_name": "Nvme2n3" 00:08:51.354 }, 00:08:51.354 { 00:08:51.354 "nbd_device": "/dev/nbd5", 00:08:51.354 "bdev_name": "Nvme3n1" 00:08:51.354 } 00:08:51.354 ]' 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.354 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.613 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.872 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.131 14:14:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.389 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.649 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:52.908 14:14:17 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:52.908 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:53.168 14:14:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:53.427 /dev/nbd0 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:53.427 
14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.427 1+0 records in 00:08:53.427 1+0 records out 00:08:53.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694089 s, 5.9 MB/s 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:53.427 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:53.686 /dev/nbd1 00:08:53.686 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:53.686 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:53.686 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:53.686 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:53.686 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:53.686 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.687 1+0 records in 00:08:53.687 1+0 records out 00:08:53.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729746 s, 5.6 MB/s 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:53.687 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:53.946 /dev/nbd10 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.946 1+0 records in 00:08:53.946 1+0 records out 00:08:53.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554874 s, 7.4 MB/s 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:53.946 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.947 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:53.947 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:54.206 /dev/nbd11 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.206 14:14:18 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.206 1+0 records in 00:08:54.206 1+0 records out 00:08:54.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083175 s, 4.9 MB/s 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:54.206 14:14:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:54.206 /dev/nbd12 00:08:54.465 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:54.465 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:54.465 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:54.465 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:54.465 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.465 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.465 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.466 1+0 records in 00:08:54.466 1+0 records out 00:08:54.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103121 s, 4.0 MB/s 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:54.466 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:54.725 /dev/nbd13 
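[annotation] Every waitfornbd call traced above follows the same shape: poll /proc/partitions until the kernel exposes the device, then read one block with O_DIRECT to prove it answers I/O. A hedged bash sketch of that pattern, reconstructed from the traced commands rather than copied from autotest_common.sh (the sleep interval and the scratch-file path are assumptions):

    # Reconstructed from the trace above; not the verbatim helper.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # Stop polling once the kernel lists the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed delay; the trace only shows the loop bounds
        done
        # Prove the device answers direct I/O: read one 4 KiB block
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # a zero-byte result would mean a silent read failure
    }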
00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.725 1+0 records in 00:08:54.725 1+0 records out 00:08:54.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000922673 s, 4.4 MB/s 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.725 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd0", 00:08:54.985 "bdev_name": "Nvme0n1" 00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd1", 00:08:54.985 "bdev_name": "Nvme1n1" 00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd10", 00:08:54.985 "bdev_name": "Nvme2n1" 00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd11", 00:08:54.985 "bdev_name": "Nvme2n2" 00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd12", 00:08:54.985 "bdev_name": "Nvme2n3" 00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd13", 00:08:54.985 "bdev_name": "Nvme3n1" 00:08:54.985 } 00:08:54.985 ]' 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd0", 00:08:54.985 "bdev_name": "Nvme0n1" 00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd1", 00:08:54.985 "bdev_name": "Nvme1n1" 00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd10", 00:08:54.985 "bdev_name": "Nvme2n1" 
00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd11", 00:08:54.985 "bdev_name": "Nvme2n2" 00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd12", 00:08:54.985 "bdev_name": "Nvme2n3" 00:08:54.985 }, 00:08:54.985 { 00:08:54.985 "nbd_device": "/dev/nbd13", 00:08:54.985 "bdev_name": "Nvme3n1" 00:08:54.985 } 00:08:54.985 ]' 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:54.985 /dev/nbd1 00:08:54.985 /dev/nbd10 00:08:54.985 /dev/nbd11 00:08:54.985 /dev/nbd12 00:08:54.985 /dev/nbd13' 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:54.985 /dev/nbd1 00:08:54.985 /dev/nbd10 00:08:54.985 /dev/nbd11 00:08:54.985 /dev/nbd12 00:08:54.985 /dev/nbd13' 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:54.985 256+0 records in 00:08:54.985 256+0 records out 00:08:54.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124414 s, 84.3 MB/s 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:54.985 256+0 records in 00:08:54.985 256+0 records out 00:08:54.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130271 s, 8.0 MB/s 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.985 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:55.244 256+0 records in 00:08:55.244 256+0 records out 00:08:55.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133841 s, 7.8 MB/s 00:08:55.244 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.244 14:14:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:55.503 256+0 records in 00:08:55.503 256+0 records out 00:08:55.503 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133749 s, 7.8 MB/s 00:08:55.503 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.503 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:55.503 256+0 records in 00:08:55.503 256+0 records out 00:08:55.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13247 s, 7.9 MB/s 00:08:55.503 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.503 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:55.762 256+0 records in 00:08:55.762 256+0 records out 00:08:55.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131599 s, 8.0 MB/s 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:55.762 256+0 records in 00:08:55.762 256+0 records out 00:08:55.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136788 s, 7.7 MB/s 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:55.762 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.763 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.022 14:14:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.281 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:56.541 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:56.541 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:56.541 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:56.541 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.541 14:14:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.541 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:56.541 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.541 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.541 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.541 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.800 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.060 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:57.319 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:57.319 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:57.319 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:57.319 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.319 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.319 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:57.319 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:57.319 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.320 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:57.320 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.320 14:14:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:57.320 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:57.320 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:57.320 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:57.579 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:57.838 malloc_lvol_verify 00:08:57.838 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:58.098 38e7776f-5ab2-42eb-88c8-209205e55036 00:08:58.098 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:58.098 80f7949f-3f54-45be-9a57-5cd091b3b53a 00:08:58.098 14:14:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:58.357 /dev/nbd0 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:58.357 mke2fs 1.47.0 (5-Feb-2023) 00:08:58.357 Discarding device blocks: 0/4096 done 00:08:58.357 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:58.357 00:08:58.357 Allocating group tables: 0/1 done 00:08:58.357 Writing inode tables: 0/1 done 00:08:58.357 Creating journal (1024 blocks): done 00:08:58.357 Writing superblocks and filesystem accounting information: 0/1 done 00:08:58.357 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:58.357 14:14:23 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.357 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62339 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62339 ']' 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62339 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62339 00:08:58.616 killing process with pid 62339 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62339' 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62339 00:08:58.616 14:14:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62339 00:08:59.997 ************************************ 00:08:59.997 END TEST bdev_nbd 00:08:59.997 ************************************ 00:08:59.997 14:14:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:59.997 00:08:59.997 real 0m11.481s 00:08:59.997 user 0m14.790s 00:08:59.997 sys 0m4.772s 00:08:59.997 14:14:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.997 14:14:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:59.997 14:14:24 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:59.997 14:14:24 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:08:59.997 skipping fio tests on NVMe due to multi-ns failures. 00:08:59.997 14:14:24 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
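[annotation] Before the trace moves on, the data pass that just completed is worth restating: nbd_dd_data_verify stages 1 MiB of random data, writes it through every NBD device with direct I/O, then byte-compares each device against the pattern. A condensed sketch of those traced steps (loop structure simplified; paths as in the log):

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    # Stage 1 MiB of random data (256 x 4 KiB blocks)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        # Write pass: push the pattern through the NBD device, bypassing the page cache
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        # Verify pass: byte-compare the first 1 MiB of each device to the pattern
        cmp -b -n 1M "$tmp" "$nbd"
    done
    rm "$tmp"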
00:08:59.997 14:14:24 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:59.997 14:14:24 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:59.997 14:14:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:59.997 14:14:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.997 14:14:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.997 ************************************ 00:08:59.997 START TEST bdev_verify 00:08:59.997 ************************************ 00:08:59.997 14:14:24 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:00.256 [2024-12-10 14:14:24.894619] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:09:00.256 [2024-12-10 14:14:24.894754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62728 ] 00:09:00.256 [2024-12-10 14:14:25.079317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:00.514 [2024-12-10 14:14:25.202035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.514 [2024-12-10 14:14:25.202090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.450 Running I/O for 5 seconds... 00:09:03.321 17472.00 IOPS, 68.25 MiB/s [2024-12-10T14:14:29.532Z] 17536.00 IOPS, 68.50 MiB/s [2024-12-10T14:14:30.469Z] 17813.33 IOPS, 69.58 MiB/s [2024-12-10T14:14:31.405Z] 17712.00 IOPS, 69.19 MiB/s [2024-12-10T14:14:31.405Z] 17728.00 IOPS, 69.25 MiB/s 00:09:06.571 Latency(us) 00:09:06.571 [2024-12-10T14:14:31.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.571 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x0 length 0xbd0bd 00:09:06.571 Nvme0n1 : 5.05 1572.73 6.14 0.00 0.00 81124.33 20950.46 69062.84 00:09:06.571 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:06.571 Nvme0n1 : 5.07 1350.15 5.27 0.00 0.00 94009.30 8632.85 69905.07 00:09:06.571 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x0 length 0xa0000 00:09:06.571 Nvme1n1 : 5.05 1572.27 6.14 0.00 0.00 81041.57 22213.81 66957.26 00:09:06.571 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0xa0000 length 0xa0000 00:09:06.571 Nvme1n1 : 5.07 1349.84 5.27 0.00 0.00 93901.22 7369.51 71168.41 00:09:06.571 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x0 length 0x80000 00:09:06.571 Nvme2n1 : 5.06 1580.16 6.17 0.00 0.00 80549.40 8685.49 66957.26 00:09:06.571 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x80000 length 0x80000 00:09:06.571 Nvme2n1 : 5.07 1349.51 5.27 0.00 0.00 93779.58 7001.03 72431.76 00:09:06.571 Job: Nvme2n2 
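[annotation] The bdev_verify stage now starting is a single bdevperf run; restated below with flag annotations for readers of this log (the annotations reflect common bdevperf usage and are not taken from the trace itself):

    #   -q 128     128 outstanding I/Os per job
    #   -o 4096    4 KiB I/O size
    #   -w verify  write, read back, and compare data
    #   -t 5       run for 5 seconds
    #   -m 0x3     core mask: cores 0 and 1 (the two reactors started below)
    #   -C         passed through exactly as traced
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3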
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x0 length 0x80000 00:09:06.571 Nvme2n2 : 5.06 1579.64 6.17 0.00 0.00 80461.54 9633.00 64851.69 00:09:06.571 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x80000 length 0x80000 00:09:06.571 Nvme2n2 : 5.05 1343.63 5.25 0.00 0.00 94850.11 21266.30 75379.56 00:09:06.571 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x0 length 0x80000 00:09:06.571 Nvme2n3 : 5.07 1579.10 6.17 0.00 0.00 80345.61 8685.49 67799.49 00:09:06.571 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x80000 length 0x80000 00:09:06.571 Nvme2n3 : 5.07 1351.02 5.28 0.00 0.00 94258.34 8896.05 70747.30 00:09:06.571 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x0 length 0x20000 00:09:06.571 Nvme3n1 : 5.07 1578.43 6.17 0.00 0.00 80235.44 9475.08 69905.07 00:09:06.571 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:06.571 Verification LBA range: start 0x20000 length 0x20000 00:09:06.571 Nvme3n1 : 5.07 1350.49 5.28 0.00 0.00 94134.07 9422.44 72010.64 00:09:06.571 [2024-12-10T14:14:31.405Z] =================================================================================================================== 00:09:06.571 [2024-12-10T14:14:31.405Z] Total : 17556.98 68.58 0.00 0.00 86868.62 7001.03 75379.56 00:09:07.950 00:09:07.950 real 0m7.775s 00:09:07.950 user 0m14.264s 00:09:07.950 sys 0m0.390s 00:09:07.950 14:14:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.950 14:14:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:07.950 ************************************ 00:09:07.950 END TEST bdev_verify 00:09:07.950 ************************************ 00:09:07.950 14:14:32 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:07.950 14:14:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:07.950 14:14:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.950 14:14:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:07.950 ************************************ 00:09:07.950 START TEST bdev_verify_big_io 00:09:07.950 ************************************ 00:09:07.950 14:14:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:07.950 [2024-12-10 14:14:32.743199] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:09:07.950 [2024-12-10 14:14:32.743339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62826 ] 00:09:08.209 [2024-12-10 14:14:32.928873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.467 [2024-12-10 14:14:33.063197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.467 [2024-12-10 14:14:33.063226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.438 Running I/O for 5 seconds... 00:09:13.349 1827.00 IOPS, 114.19 MiB/s [2024-12-10T14:14:39.561Z] 3034.50 IOPS, 189.66 MiB/s [2024-12-10T14:14:40.129Z] 2974.67 IOPS, 185.92 MiB/s 00:09:15.295 Latency(us) 00:09:15.295 [2024-12-10T14:14:40.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.295 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x0 length 0xbd0b 00:09:15.295 Nvme0n1 : 5.33 215.96 13.50 0.00 0.00 578851.02 25161.61 522182.43 00:09:15.295 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:15.295 Nvme0n1 : 5.49 151.68 9.48 0.00 0.00 811366.16 11843.86 1138694.58 00:09:15.295 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x0 length 0xa000 00:09:15.295 Nvme1n1 : 5.45 214.91 13.43 0.00 0.00 567709.04 64430.57 667045.94 00:09:15.295 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0xa000 length 0xa000 00:09:15.295 Nvme1n1 : 5.53 147.56 9.22 0.00 0.00 802864.77 42111.49 1590129.71 00:09:15.295 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x0 length 0x8000 00:09:15.295 Nvme2n1 : 5.46 222.18 13.89 0.00 0.00 549372.37 7422.15 683890.53 00:09:15.295 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x8000 length 0x8000 00:09:15.295 Nvme2n1 : 5.59 156.81 9.80 0.00 0.00 736095.45 30320.27 1623818.90 00:09:15.295 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x0 length 0x8000 00:09:15.295 Nvme2n2 : 5.48 230.79 14.42 0.00 0.00 524269.13 7158.95 683890.53 00:09:15.295 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x8000 length 0x8000 00:09:15.295 Nvme2n2 : 5.69 177.78 11.11 0.00 0.00 628122.67 24424.66 1185859.44 00:09:15.295 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x0 length 0x8000 00:09:15.295 Nvme2n3 : 5.48 228.92 14.31 0.00 0.00 520044.55 7106.31 603036.48 00:09:15.295 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x8000 length 0x8000 00:09:15.295 Nvme2n3 : 5.83 216.63 13.54 0.00 0.00 498904.20 13686.23 1711410.79 00:09:15.295 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x0 length 0x2000 00:09:15.295 Nvme3n1 : 5.48 230.51 14.41 0.00 0.00 508601.68 7685.35 606405.40 00:09:15.295 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:09:15.295 Verification LBA range: start 0x2000 length 0x2000 00:09:15.295 Nvme3n1 : 6.00 313.83 19.61 0.00 0.00 336720.65 473.75 1738362.14 00:09:15.295 [2024-12-10T14:14:40.129Z] =================================================================================================================== 00:09:15.295 [2024-12-10T14:14:40.129Z] Total : 2507.57 156.72 0.00 0.00 559324.06 473.75 1738362.14 00:09:17.200 00:09:17.200 real 0m9.357s 00:09:17.200 user 0m17.366s 00:09:17.200 sys 0m0.445s 00:09:17.200 14:14:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.200 14:14:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:17.200 ************************************ 00:09:17.201 END TEST bdev_verify_big_io 00:09:17.201 ************************************ 00:09:17.460 14:14:42 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:17.460 14:14:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:17.460 14:14:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.460 14:14:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.460 ************************************ 00:09:17.460 START TEST bdev_write_zeroes 00:09:17.460 ************************************ 00:09:17.460 14:14:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:17.460 [2024-12-10 14:14:42.183211] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:09:17.460 [2024-12-10 14:14:42.183359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62953 ] 00:09:17.719 [2024-12-10 14:14:42.369125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.719 [2024-12-10 14:14:42.506187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.655 Running I/O for 1 seconds... 
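[annotation] The bdev_write_zeroes run now in flight uses a single core and 4 KiB I/Os for one second; as a quick sanity check on the summary that follows, the reported IOPS and MiB/s figures should be consistent with that I/O size:

    # 74496 IOPS x 4096 B = 305,135,616 B/s
    # 305,135,616 / 2^20  ≈ 291.00 MiB/s   (matches the rate reported below)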
00:09:19.588 74496.00 IOPS, 291.00 MiB/s 00:09:19.588 Latency(us) 00:09:19.588 [2024-12-10T14:14:44.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.588 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:19.588 Nvme0n1 : 1.02 12362.07 48.29 0.00 0.00 10334.82 8632.85 28846.37 00:09:19.588 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:19.588 Nvme1n1 : 1.02 12350.79 48.25 0.00 0.00 10332.32 8843.41 29056.93 00:09:19.588 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:19.588 Nvme2n1 : 1.02 12339.50 48.20 0.00 0.00 10289.69 8527.58 25372.17 00:09:19.588 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:19.588 Nvme2n2 : 1.02 12327.45 48.15 0.00 0.00 10263.59 8527.58 23266.60 00:09:19.588 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:19.588 Nvme2n3 : 1.02 12316.18 48.11 0.00 0.00 10231.65 8422.30 20634.63 00:09:19.588 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:19.588 Nvme3n1 : 1.02 12305.30 48.07 0.00 0.00 10206.54 7001.03 20002.96 00:09:19.588 [2024-12-10T14:14:44.422Z] =================================================================================================================== 00:09:19.589 [2024-12-10T14:14:44.423Z] Total : 74001.30 289.07 0.00 0.00 10276.44 7001.03 29056.93 00:09:20.966 00:09:20.966 real 0m3.430s 00:09:20.966 user 0m2.963s 00:09:20.966 sys 0m0.352s 00:09:20.966 14:14:45 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.966 14:14:45 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:20.967 ************************************ 00:09:20.967 END TEST bdev_write_zeroes 00:09:20.967 ************************************ 00:09:20.967 14:14:45 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:20.967 14:14:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:20.967 14:14:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.967 14:14:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.967 ************************************ 00:09:20.967 START TEST bdev_json_nonenclosed 00:09:20.967 ************************************ 00:09:20.967 14:14:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:20.967 [2024-12-10 14:14:45.688864] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
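[annotation] bdev_json_nonenclosed is a negative test: bdevperf is handed a config whose top level is not a JSON object and must fail cleanly, which is exactly the error traced below. The file's contents are not shown in this log; a minimal hypothetical example that would trigger the same message:

    # Hypothetical nonenclosed.json (the real file is not shown in this log):
    #   [ { "subsystems": [] } ]     <- top-level array, not the required {...}
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1
    # Expected: "Invalid JSON configuration: not enclosed in {}." followed by
    # a non-zero app stop, which the test counts as a pass.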
00:09:20.967 [2024-12-10 14:14:45.688990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63006 ] 00:09:21.226 [2024-12-10 14:14:45.868736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.226 [2024-12-10 14:14:45.992998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.226 [2024-12-10 14:14:45.993109] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:21.226 [2024-12-10 14:14:45.993133] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:21.226 [2024-12-10 14:14:45.993146] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:21.485 00:09:21.485 real 0m0.668s 00:09:21.485 user 0m0.409s 00:09:21.485 sys 0m0.154s 00:09:21.485 14:14:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.485 ************************************ 00:09:21.485 END TEST bdev_json_nonenclosed 00:09:21.486 ************************************ 00:09:21.486 14:14:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:21.745 14:14:46 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:21.745 14:14:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:21.745 14:14:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.745 14:14:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.745 ************************************ 00:09:21.745 START TEST bdev_json_nonarray 00:09:21.745 ************************************ 00:09:21.745 14:14:46 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:21.745 [2024-12-10 14:14:46.437974] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:09:21.745 [2024-12-10 14:14:46.438098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63037 ] 00:09:22.004 [2024-12-10 14:14:46.618174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.004 [2024-12-10 14:14:46.744266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.004 [2024-12-10 14:14:46.744393] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
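Both negative cases reduce to the same rule: the --json file handed to bdevperf must be a single object whose "subsystems" key is an array, which is exactly what the two errors above report missing (nonenclosed.json omits the outer braces, nonarray.json makes "subsystems" a non-array). A well-formed counterpart, sketched with one controller entry borrowed from the gen_nvme.sh payload that appears later in this log:

    # Sketch of a config that passes both checks: enclosed in {} and
    # carrying "subsystems" as an array.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF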
00:09:22.004 [2024-12-10 14:14:46.744418] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:22.004 [2024-12-10 14:14:46.744431] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.263 00:09:22.263 real 0m0.665s 00:09:22.263 user 0m0.414s 00:09:22.263 sys 0m0.146s 00:09:22.263 14:14:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.263 14:14:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:22.263 ************************************ 00:09:22.263 END TEST bdev_json_nonarray 00:09:22.263 ************************************ 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:22.263 14:14:47 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:22.263 00:09:22.263 real 0m44.290s 00:09:22.263 user 1m4.458s 00:09:22.263 sys 0m8.508s 00:09:22.263 14:14:47 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.263 14:14:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.263 ************************************ 00:09:22.263 END TEST blockdev_nvme 00:09:22.263 ************************************ 00:09:22.522 14:14:47 -- spdk/autotest.sh@209 -- # uname -s 00:09:22.522 14:14:47 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:09:22.522 14:14:47 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:22.522 14:14:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.522 14:14:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.522 14:14:47 -- common/autotest_common.sh@10 -- # set +x 00:09:22.522 ************************************ 00:09:22.522 START TEST blockdev_nvme_gpt 00:09:22.522 ************************************ 00:09:22.522 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:22.522 * Looking for test storage... 
00:09:22.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:22.522 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.522 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.522 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.783 14:14:47 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.783 --rc genhtml_branch_coverage=1 00:09:22.783 --rc genhtml_function_coverage=1 00:09:22.783 --rc genhtml_legend=1 00:09:22.783 --rc geninfo_all_blocks=1 00:09:22.783 --rc geninfo_unexecuted_blocks=1 00:09:22.783 00:09:22.783 ' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.783 --rc 
genhtml_branch_coverage=1 00:09:22.783 --rc genhtml_function_coverage=1 00:09:22.783 --rc genhtml_legend=1 00:09:22.783 --rc geninfo_all_blocks=1 00:09:22.783 --rc geninfo_unexecuted_blocks=1 00:09:22.783 00:09:22.783 ' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.783 --rc genhtml_branch_coverage=1 00:09:22.783 --rc genhtml_function_coverage=1 00:09:22.783 --rc genhtml_legend=1 00:09:22.783 --rc geninfo_all_blocks=1 00:09:22.783 --rc geninfo_unexecuted_blocks=1 00:09:22.783 00:09:22.783 ' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.783 --rc genhtml_branch_coverage=1 00:09:22.783 --rc genhtml_function_coverage=1 00:09:22.783 --rc genhtml_legend=1 00:09:22.783 --rc geninfo_all_blocks=1 00:09:22.783 --rc geninfo_unexecuted_blocks=1 00:09:22.783 00:09:22.783 ' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63121 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 63121 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 63121 ']' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.783 14:14:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:22.783 [2024-12-10 14:14:47.522199] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:09:22.783 [2024-12-10 14:14:47.522336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63121 ] 00:09:23.042 [2024-12-10 14:14:47.702162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.042 [2024-12-10 14:14:47.830105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.420 14:14:48 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.420 14:14:48 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:09:24.420 14:14:48 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:09:24.420 14:14:48 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:09:24.420 14:14:48 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.938 Waiting for block devices as requested 00:09:24.938 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.196 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.196 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.455 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.725 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:30.725 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:09:30.725 14:14:55 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:30.725 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:09:30.726 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.726 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:09:30.726 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:09:30.726 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:30.726 14:14:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
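Every namespace above goes through the same two-step test: the queue/zoned attribute must exist, and it must read something other than "none", before the device is treated as zoned. Condensed into a loop (device discovery via /sys/class/nvme as in the trace; a sketch, not the harness function itself):

    # Sketch: flag any NVMe namespace whose block queue reports a zoned model.
    for ns in /sys/class/nvme/nvme*/nvme*n*; do
        dev=$(basename "$ns")
        [[ -e /sys/block/$dev/queue/zoned ]] || continue
        [[ $(cat "/sys/block/$dev/queue/zoned") != none ]] && echo "$dev: zoned"
    done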
00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:30.726 BYT; 00:09:30.726 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:30.726 BYT; 00:09:30.726 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:30.726 14:14:55 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:30.726 14:14:55 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:31.664 The operation has completed successfully. 00:09:31.664 14:14:56 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:32.601 The operation has completed successfully. 00:09:32.601 14:14:57 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:33.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.105 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.105 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.105 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.105 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.364 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:34.364 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.364 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.364 [] 00:09:34.364 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.364 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:34.364 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:34.364 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:34.364 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:34.364 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:34.364 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.364 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.623 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.623 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:09:34.623 14:14:59 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.623 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.623 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.623 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:09:34.623 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:09:34.623 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.623 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.883 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.883 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:09:34.883 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.883 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.883 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.883 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:34.883 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.883 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.883 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.883 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:09:34.883 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:09:34.883 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.884 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.884 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:09:34.884 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.884 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:09:34.884 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:09:34.884 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b6fceef4-a0f1-4ebc-bcad-36335e892013"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b6fceef4-a0f1-4ebc-bcad-36335e892013",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1002cf6e-c04b-40f7-a25a-e06d072af218"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1002cf6e-c04b-40f7-a25a-e06d072af218",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "63a01830-2691-4342-bf2b-4cb3a9c33a02"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "63a01830-2691-4342-bf2b-4cb3a9c33a02",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e6980ad7-ce34-46bc-8d13-84098cc386f8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e6980ad7-ce34-46bc-8d13-84098cc386f8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6c6ad926-76e4-4831-a989-203ab58eb63c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6c6ad926-76e4-4831-a989-203ab58eb63c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:34.884 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:09:34.884 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:09:34.884 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:09:34.884 14:14:59 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 63121 00:09:34.884 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 63121 ']' 00:09:34.884 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 63121 00:09:34.884 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:09:34.884 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.884 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63121 00:09:35.143 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.143 killing process with pid 63121 00:09:35.143 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.143 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63121' 00:09:35.143 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 63121 00:09:35.143 14:14:59 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 63121 00:09:37.678 14:15:02 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:37.678 14:15:02 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:37.678 14:15:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:37.678 14:15:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.678 14:15:02 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:37.678 ************************************ 00:09:37.678 START TEST bdev_hello_world 00:09:37.678 ************************************ 00:09:37.678 14:15:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:37.678 [2024-12-10 14:15:02.439863] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:09:37.678 [2024-12-10 14:15:02.440020] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63769 ] 00:09:37.937 [2024-12-10 14:15:02.623765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.937 [2024-12-10 14:15:02.766313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.874 [2024-12-10 14:15:03.484907] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:38.874 [2024-12-10 14:15:03.484960] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:38.874 [2024-12-10 14:15:03.485003] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:38.874 [2024-12-10 14:15:03.488228] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:38.874 [2024-12-10 14:15:03.488923] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:38.874 [2024-12-10 14:15:03.488962] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:38.874 [2024-12-10 14:15:03.489181] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
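The Hello World! read-back confirms the write path through the freshly attached Nvme0n1. The GPT layout that the Nvme1n1p1/Nvme1n1p2 bdevs in this run depend on was prepared by setup_gpt_conf a few steps earlier; condensed (labels, percentages, and GUIDs verbatim from the log), that sequence amounts to:

    # Sketch of the earlier GPT preparation on the first unlabeled namespace:
    # a fresh GPT label with two half-disk partitions, then retagging both
    # with the SPDK partition type GUIDs grepped out of module/bdev/gpt/gpt.h.
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1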
00:09:38.874 00:09:38.874 [2024-12-10 14:15:03.489210] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:40.290 00:09:40.290 real 0m2.371s 00:09:40.290 user 0m1.935s 00:09:40.290 sys 0m0.327s 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 ************************************ 00:09:40.290 END TEST bdev_hello_world 00:09:40.290 ************************************ 00:09:40.290 14:15:04 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:09:40.290 14:15:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.290 14:15:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.290 14:15:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 ************************************ 00:09:40.290 START TEST bdev_bounds 00:09:40.290 ************************************ 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63818 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63818' 00:09:40.290 Process bdevio pid: 63818 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63818 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63818 ']' 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.290 14:15:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:40.290 [2024-12-10 14:15:04.888397] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
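bdev_bounds drives bdevio rather than bdevperf: the binary starts with -w and no pre-reserved memory (-s 0, the harness's PRE_RESERVED_MEM), then sits on the RPC socket until tests.py fires perform_tests, which is what produces the CUnit suites below. A sketch of that handshake (paths from the log; the backgrounding and crude readiness wait are assumptions):

    # Sketch: launch bdevio waiting for an RPC trigger, then run the suites.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    sleep 2   # crude readiness wait for /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests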
00:09:40.290 [2024-12-10 14:15:04.888533] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63818 ] 00:09:40.290 [2024-12-10 14:15:05.067954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.553 [2024-12-10 14:15:05.205109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.553 [2024-12-10 14:15:05.205312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.553 [2024-12-10 14:15:05.205315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.491 14:15:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.491 14:15:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:41.491 14:15:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:41.491 I/O targets: 00:09:41.491 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:41.491 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:41.491 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:41.491 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:41.491 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:41.491 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:41.491 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:41.491 00:09:41.491 00:09:41.491 CUnit - A unit testing framework for C - Version 2.1-3 00:09:41.491 http://cunit.sourceforge.net/ 00:09:41.491 00:09:41.491 00:09:41.491 Suite: bdevio tests on: Nvme3n1 00:09:41.491 Test: blockdev write read block ...passed 00:09:41.491 Test: blockdev write zeroes read block ...passed 00:09:41.491 Test: blockdev write zeroes read no split ...passed 00:09:41.491 Test: blockdev write zeroes read split ...passed 00:09:41.491 Test: blockdev write zeroes read split partial ...passed 00:09:41.491 Test: blockdev reset ...[2024-12-10 14:15:06.139483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:41.491 [2024-12-10 14:15:06.143755] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:41.491 passed 00:09:41.491 Test: blockdev write read 8 blocks ...passed 00:09:41.491 Test: blockdev write read size > 128k ...passed 00:09:41.491 Test: blockdev write read invalid size ...passed 00:09:41.491 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.491 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.491 Test: blockdev write read max offset ...passed 00:09:41.491 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.491 Test: blockdev writev readv 8 blocks ...passed 00:09:41.491 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.491 Test: blockdev writev readv block ...passed 00:09:41.491 Test: blockdev writev readv size > 128k ...passed 00:09:41.491 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.491 Test: blockdev comparev and writev ...[2024-12-10 14:15:06.154586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3604000 len:0x1000 00:09:41.491 [2024-12-10 14:15:06.154651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:41.491 passed 00:09:41.491 Test: blockdev nvme passthru rw ...passed 00:09:41.491 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:15:06.155760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:41.491 [2024-12-10 14:15:06.155800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:41.491 passed 00:09:41.491 Test: blockdev nvme admin passthru ...passed 00:09:41.491 Test: blockdev copy ...passed 00:09:41.491 Suite: bdevio tests on: Nvme2n3 00:09:41.491 Test: blockdev write read block ...passed 00:09:41.491 Test: blockdev write zeroes read block ...passed 00:09:41.491 Test: blockdev write zeroes read no split ...passed 00:09:41.491 Test: blockdev write zeroes read split ...passed 00:09:41.491 Test: blockdev write zeroes read split partial ...passed 00:09:41.491 Test: blockdev reset ...[2024-12-10 14:15:06.240132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:41.491 [2024-12-10 14:15:06.244892] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:41.491 passed 00:09:41.491 Test: blockdev write read 8 blocks ...passed 00:09:41.491 Test: blockdev write read size > 128k ...passed 00:09:41.491 Test: blockdev write read invalid size ...passed 00:09:41.491 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.491 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.491 Test: blockdev write read max offset ...passed 00:09:41.491 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.491 Test: blockdev writev readv 8 blocks ...passed 00:09:41.491 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.492 Test: blockdev writev readv block ...passed 00:09:41.492 Test: blockdev writev readv size > 128k ...passed 00:09:41.492 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.492 Test: blockdev comparev and writev ...[2024-12-10 14:15:06.254503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3602000 len:0x1000 00:09:41.492 [2024-12-10 14:15:06.254570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:41.492 passed 00:09:41.492 Test: blockdev nvme passthru rw ...passed 00:09:41.492 Test: blockdev nvme passthru vendor specific ...passed 00:09:41.492 Test: blockdev nvme admin passthru ...[2024-12-10 14:15:06.255472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:41.492 [2024-12-10 14:15:06.255506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:41.492 passed 00:09:41.492 Test: blockdev copy ...passed 00:09:41.492 Suite: bdevio tests on: Nvme2n2 00:09:41.492 Test: blockdev write read block ...passed 00:09:41.492 Test: blockdev write zeroes read block ...passed 00:09:41.492 Test: blockdev write zeroes read no split ...passed 00:09:41.492 Test: blockdev write zeroes read split ...passed 00:09:41.751 Test: blockdev write zeroes read split partial ...passed 00:09:41.751 Test: blockdev reset ...[2024-12-10 14:15:06.337358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:41.751 [2024-12-10 14:15:06.342053] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:41.751 passed 00:09:41.751 Test: blockdev write read 8 blocks ...passed 00:09:41.751 Test: blockdev write read size > 128k ...passed 00:09:41.751 Test: blockdev write read invalid size ...passed 00:09:41.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.751 Test: blockdev write read max offset ...passed 00:09:41.751 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.751 Test: blockdev writev readv 8 blocks ...passed 00:09:41.751 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.751 Test: blockdev writev readv block ...passed 00:09:41.751 Test: blockdev writev readv size > 128k ...passed 00:09:41.751 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.751 Test: blockdev comparev and writev ...[2024-12-10 14:15:06.351367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7438000 len:0x1000 00:09:41.751 [2024-12-10 14:15:06.351427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:41.751 passed 00:09:41.751 Test: blockdev nvme passthru rw ...passed 00:09:41.751 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:15:06.352566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:41.751 [2024-12-10 14:15:06.352600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:41.751 passed 00:09:41.751 Test: blockdev nvme admin passthru ...passed 00:09:41.751 Test: blockdev copy ...passed 00:09:41.751 Suite: bdevio tests on: Nvme2n1 00:09:41.751 Test: blockdev write read block ...passed 00:09:41.751 Test: blockdev write zeroes read block ...passed 00:09:41.751 Test: blockdev write zeroes read no split ...passed 00:09:41.751 Test: blockdev write zeroes read split ...passed 00:09:41.751 Test: blockdev write zeroes read split partial ...passed 00:09:41.751 Test: blockdev reset ...[2024-12-10 14:15:06.430504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:41.751 [2024-12-10 14:15:06.435126] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:41.751 passed 00:09:41.751 Test: blockdev write read 8 blocks ...passed 00:09:41.751 Test: blockdev write read size > 128k ...passed 00:09:41.751 Test: blockdev write read invalid size ...passed 00:09:41.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.751 Test: blockdev write read max offset ...passed 00:09:41.751 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.751 Test: blockdev writev readv 8 blocks ...passed 00:09:41.751 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.751 Test: blockdev writev readv block ...passed 00:09:41.751 Test: blockdev writev readv size > 128k ...passed 00:09:41.751 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.751 Test: blockdev comparev and writev ...[2024-12-10 14:15:06.444276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7434000 len:0x1000 00:09:41.751 [2024-12-10 14:15:06.444335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:41.751 passed 00:09:41.751 Test: blockdev nvme passthru rw ...passed 00:09:41.751 Test: blockdev nvme passthru vendor specific ...passed 00:09:41.751 Test: blockdev nvme admin passthru ...[2024-12-10 14:15:06.445307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:41.751 [2024-12-10 14:15:06.445340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:41.751 passed 00:09:41.751 Test: blockdev copy ...passed 00:09:41.751 Suite: bdevio tests on: Nvme1n1p2 00:09:41.751 Test: blockdev write read block ...passed 00:09:41.751 Test: blockdev write zeroes read block ...passed 00:09:41.751 Test: blockdev write zeroes read no split ...passed 00:09:41.751 Test: blockdev write zeroes read split ...passed 00:09:41.751 Test: blockdev write zeroes read split partial ...passed 00:09:41.751 Test: blockdev reset ...[2024-12-10 14:15:06.525947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:41.751 [2024-12-10 14:15:06.530099] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:41.751 passed 00:09:41.751 Test: blockdev write read 8 blocks ...passed 00:09:41.752 Test: blockdev write read size > 128k ...passed 00:09:41.752 Test: blockdev write read invalid size ...passed 00:09:41.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.752 Test: blockdev write read max offset ...passed 00:09:41.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.752 Test: blockdev writev readv 8 blocks ...passed 00:09:41.752 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.752 Test: blockdev writev readv block ...passed 00:09:41.752 Test: blockdev writev readv size > 128k ...passed 00:09:41.752 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.752 Test: blockdev comparev and writev ...[2024-12-10 14:15:06.540232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c7430000 len:0x1000 00:09:41.752 [2024-12-10 14:15:06.540288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:41.752 passed 00:09:41.752 Test: blockdev nvme passthru rw ...passed 00:09:41.752 Test: blockdev nvme passthru vendor specific ...passed 00:09:41.752 Test: blockdev nvme admin passthru ...passed 00:09:41.752 Test: blockdev copy ...passed 00:09:41.752 Suite: bdevio tests on: Nvme1n1p1 00:09:41.752 Test: blockdev write read block ...passed 00:09:41.752 Test: blockdev write zeroes read block ...passed 00:09:41.752 Test: blockdev write zeroes read no split ...passed 00:09:41.752 Test: blockdev write zeroes read split ...passed 00:09:42.011 Test: blockdev write zeroes read split partial ...passed 00:09:42.011 Test: blockdev reset ...[2024-12-10 14:15:06.611705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:42.011 [2024-12-10 14:15:06.615976] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
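The "blockdev reset" step is a full controller bounce: nvme_ctrlr_disconnect tears the admin and I/O qpairs down, and bdev_nvme_reset_ctrlr_complete confirms reattach. The same path can be driven by hand through the bdev_nvme_reset_controller RPC; the controller name "Nvme1" and use of the default RPC socket are assumptions here:

    # Ask the bdev_nvme driver to disconnect and reconnect one attached controller.
    # On success the target prints the same "Resetting controller successful."
    # notice that appears throughout this log.
    ./scripts/rpc.py bdev_nvme_reset_controller Nvme1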
00:09:42.011 passed 00:09:42.011 Test: blockdev write read 8 blocks ...passed 00:09:42.011 Test: blockdev write read size > 128k ...passed 00:09:42.011 Test: blockdev write read invalid size ...passed 00:09:42.011 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.011 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.011 Test: blockdev write read max offset ...passed 00:09:42.011 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.011 Test: blockdev writev readv 8 blocks ...passed 00:09:42.011 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.011 Test: blockdev writev readv block ...passed 00:09:42.011 Test: blockdev writev readv size > 128k ...passed 00:09:42.011 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.011 Test: blockdev comparev and writev ...[2024-12-10 14:15:06.625519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b380e000 len:0x1000 00:09:42.011 [2024-12-10 14:15:06.625572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:42.011 passed 00:09:42.011 Test: blockdev nvme passthru rw ...passed 00:09:42.011 Test: blockdev nvme passthru vendor specific ...passed 00:09:42.011 Test: blockdev nvme admin passthru ...passed 00:09:42.011 Test: blockdev copy ...passed 00:09:42.011 Suite: bdevio tests on: Nvme0n1 00:09:42.011 Test: blockdev write read block ...passed 00:09:42.011 Test: blockdev write zeroes read block ...passed 00:09:42.011 Test: blockdev write zeroes read no split ...passed 00:09:42.011 Test: blockdev write zeroes read split ...passed 00:09:42.012 Test: blockdev write zeroes read split partial ...passed 00:09:42.012 Test: blockdev reset ...[2024-12-10 14:15:06.696259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:42.012 [2024-12-10 14:15:06.700435] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:42.012 passed 00:09:42.012 Test: blockdev write read 8 blocks ...passed 00:09:42.012 Test: blockdev write read size > 128k ...passed 00:09:42.012 Test: blockdev write read invalid size ...passed 00:09:42.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.012 Test: blockdev write read max offset ...passed 00:09:42.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.012 Test: blockdev writev readv 8 blocks ...passed 00:09:42.012 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.012 Test: blockdev writev readv block ...passed 00:09:42.012 Test: blockdev writev readv size > 128k ...passed 00:09:42.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.012 Test: blockdev comparev and writev ...passed 00:09:42.012 Test: blockdev nvme passthru rw ...[2024-12-10 14:15:06.708755] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:42.012 separate metadata which is not supported yet. 
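The *ERROR* line above is expected noise, not a failure: bdevio skips comparev_and_writev on Nvme0n1 because that namespace is formatted with a separate per-block metadata buffer, which the fused compare-and-write path does not support yet, and the test is still marked passed. Whether a namespace carries metadata shows up in its in-use LBA format; a quick check with nvme-cli, where the device path is an assumption:

    # ms:<n> on the "(in use)" line is the metadata bytes per block; ms:0 means none.
    sudo nvme id-ns /dev/nvme0n1 | grep 'in use'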
00:09:42.012 passed 00:09:42.012 Test: blockdev nvme passthru vendor specific ...[2024-12-10 14:15:06.709419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:42.012 [2024-12-10 14:15:06.709464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:42.012 passed 00:09:42.012 Test: blockdev nvme admin passthru ...passed 00:09:42.012 Test: blockdev copy ...passed 00:09:42.012 00:09:42.012 Run Summary: Type Total Ran Passed Failed Inactive 00:09:42.012 suites 7 7 n/a 0 0 00:09:42.012 tests 161 161 161 0 0 00:09:42.012 asserts 1025 1025 1025 0 n/a 00:09:42.012 00:09:42.012 Elapsed time = 1.757 seconds 00:09:42.012 0 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63818 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63818 ']' 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63818 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63818 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.012 killing process with pid 63818 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63818' 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63818 00:09:42.012 14:15:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63818 00:09:43.392 14:15:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:43.392 00:09:43.392 real 0m3.121s 00:09:43.392 user 0m7.897s 00:09:43.392 sys 0m0.503s 00:09:43.392 14:15:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.392 14:15:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:43.392 ************************************ 00:09:43.393 END TEST bdev_bounds 00:09:43.393 ************************************ 00:09:43.393 14:15:07 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:43.393 14:15:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:43.393 14:15:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.393 14:15:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.393 ************************************ 00:09:43.393 START TEST bdev_nbd 00:09:43.393 ************************************ 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63883 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63883 /var/tmp/spdk-nbd.sock 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63883 ']' 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:43.393 14:15:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.393 14:15:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:43.393 [2024-12-10 14:15:08.094211] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
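Condensed from the xtrace above, the bdev_nbd prologue does three things: require the nbd kernel module, start bdev_svc as a dedicated RPC target on /var/tmp/spdk-nbd.sock, and block until that socket answers. The paths mirror the log; the modprobe fallback is an assumption, since the helper only tests that the module is present:

    rpc_sock=/var/tmp/spdk-nbd.sock
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    [[ -e /sys/module/nbd ]] || sudo modprobe nbd
    # bdev_svc is a minimal SPDK app that just loads the bdevs from the JSON config
    # and serves RPCs; -i 0 picks the shared-memory instance id seen in the EAL args.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 --json "$conf" &
    nbd_pid=$!
    # waitforlisten (common/autotest_common.sh) then polls until $rpc_sock accepts RPCs.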
00:09:43.393 [2024-12-10 14:15:08.094342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.652 [2024-12-10 14:15:08.287656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.652 [2024-12-10 14:15:08.419834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:44.590 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.849 1+0 records in 00:09:44.849 1+0 records out 00:09:44.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669518 s, 6.1 MB/s 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:44.849 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.109 1+0 records in 00:09:45.109 1+0 records out 00:09:45.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725166 s, 5.6 MB/s 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:45.109 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.368 1+0 records in 00:09:45.368 1+0 records out 00:09:45.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102469 s, 4.0 MB/s 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:45.368 14:15:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.627 1+0 records in 00:09:45.627 1+0 records out 00:09:45.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811754 s, 5.0 MB/s 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:45.627 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.887 1+0 records in 00:09:45.887 1+0 records out 00:09:45.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664061 s, 6.2 MB/s 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:45.887 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:46.146 1+0 records in 00:09:46.146 1+0 records out 00:09:46.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758655 s, 5.4 MB/s 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:46.146 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:46.405 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:46.406 1+0 records in 00:09:46.406 1+0 records out 00:09:46.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776783 s, 5.3 MB/s 00:09:46.406 14:15:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd0", 00:09:46.406 "bdev_name": "Nvme0n1" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd1", 00:09:46.406 "bdev_name": "Nvme1n1p1" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd2", 00:09:46.406 "bdev_name": "Nvme1n1p2" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd3", 00:09:46.406 "bdev_name": "Nvme2n1" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd4", 00:09:46.406 "bdev_name": "Nvme2n2" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd5", 00:09:46.406 "bdev_name": "Nvme2n3" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd6", 00:09:46.406 "bdev_name": "Nvme3n1" 00:09:46.406 } 00:09:46.406 ]' 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:46.406 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd0", 00:09:46.406 "bdev_name": "Nvme0n1" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd1", 00:09:46.406 "bdev_name": "Nvme1n1p1" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd2", 00:09:46.406 "bdev_name": "Nvme1n1p2" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd3", 00:09:46.406 "bdev_name": "Nvme2n1" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd4", 00:09:46.406 "bdev_name": "Nvme2n2" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd5", 00:09:46.406 "bdev_name": "Nvme2n3" 00:09:46.406 }, 00:09:46.406 { 00:09:46.406 "nbd_device": "/dev/nbd6", 00:09:46.406 "bdev_name": "Nvme3n1" 00:09:46.406 } 00:09:46.406 ]' 00:09:46.665 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:46.665 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.665 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.666 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.925 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:47.184 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:47.184 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:47.184 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:47.184 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.184 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.184 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:47.184 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.184 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.184 14:15:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.184 14:15:11 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.443 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.702 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.962 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:48.221 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.221 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.221 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.221 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.221 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.221 14:15:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:48.221 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:48.221 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.221 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:48.221 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:48.221 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:48.480 
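The start/stop pass that just finished reduces to one assertion: after nbd_stop_disk has run for every device, nbd_get_disks must report nothing still exported. Standalone, the check traced above looks like this (socket path taken from the log; the trailing true mirrors the helper's own guard, since grep -c exits non-zero when the count is 0):

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [[ $count -eq 0 ]] || { echo "still exported: $count nbd device(s)"; exit 1; }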
14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:48.480 /dev/nbd0 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.480 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.739 1+0 records in 00:09:48.739 1+0 records out 00:09:48.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066996 s, 6.1 MB/s 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:48.739 /dev/nbd1 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.739 14:15:13 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.739 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.998 1+0 records in 00:09:48.998 1+0 records out 00:09:48.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654414 s, 6.3 MB/s 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:48.998 /dev/nbd10 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.998 1+0 records in 00:09:48.998 1+0 records out 00:09:48.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056972 s, 7.2 MB/s 00:09:48.998 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.257 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:49.257 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.257 14:15:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.257 14:15:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:49.257 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.257 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:49.257 14:15:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:49.257 /dev/nbd11 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.257 1+0 records in 00:09:49.257 1+0 records out 00:09:49.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612463 s, 6.7 MB/s 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:49.257 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:49.516 /dev/nbd12 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
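The waitfornbd pattern repeating above (grep, break, dd, stat, rm) is how each mapping is proven live: poll /proc/partitions until the kernel registers the device, then read one 4 KiB block through O_DIRECT and confirm something came back. Condensed into a helper, with the retry interval as an assumption since the loop body between attempts is not visible in this log:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # retry interval is an assumption
        done
        # One direct read proves the nbd mapping actually serves I/O.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]   # matches the trace's '[' 4096 '!=' 0 ']': non-empty, not exactly one block
    }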
00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.516 1+0 records in 00:09:49.516 1+0 records out 00:09:49.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075291 s, 5.4 MB/s 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.516 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.517 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:49.517 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.517 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:49.517 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:49.775 /dev/nbd13 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.775 1+0 records in 00:09:49.775 1+0 records out 00:09:49.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742296 s, 5.5 MB/s 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:49.775 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:50.034 /dev/nbd14 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:50.034 1+0 records in 00:09:50.034 1+0 records out 00:09:50.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130012 s, 3.2 MB/s 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.034 14:15:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd0", 00:09:50.293 "bdev_name": "Nvme0n1" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd1", 00:09:50.293 "bdev_name": "Nvme1n1p1" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd10", 00:09:50.293 "bdev_name": "Nvme1n1p2" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd11", 00:09:50.293 "bdev_name": "Nvme2n1" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd12", 00:09:50.293 "bdev_name": "Nvme2n2" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd13", 00:09:50.293 "bdev_name": "Nvme2n3" 
00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd14", 00:09:50.293 "bdev_name": "Nvme3n1" 00:09:50.293 } 00:09:50.293 ]' 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd0", 00:09:50.293 "bdev_name": "Nvme0n1" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd1", 00:09:50.293 "bdev_name": "Nvme1n1p1" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd10", 00:09:50.293 "bdev_name": "Nvme1n1p2" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd11", 00:09:50.293 "bdev_name": "Nvme2n1" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd12", 00:09:50.293 "bdev_name": "Nvme2n2" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd13", 00:09:50.293 "bdev_name": "Nvme2n3" 00:09:50.293 }, 00:09:50.293 { 00:09:50.293 "nbd_device": "/dev/nbd14", 00:09:50.293 "bdev_name": "Nvme3n1" 00:09:50.293 } 00:09:50.293 ]' 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:50.293 /dev/nbd1 00:09:50.293 /dev/nbd10 00:09:50.293 /dev/nbd11 00:09:50.293 /dev/nbd12 00:09:50.293 /dev/nbd13 00:09:50.293 /dev/nbd14' 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:50.293 /dev/nbd1 00:09:50.293 /dev/nbd10 00:09:50.293 /dev/nbd11 00:09:50.293 /dev/nbd12 00:09:50.293 /dev/nbd13 00:09:50.293 /dev/nbd14' 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:50.293 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:50.552 256+0 records in 00:09:50.552 256+0 records out 00:09:50.552 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116107 s, 90.3 MB/s 00:09:50.552 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:50.552 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:50.552 256+0 records in 00:09:50.552 256+0 records out 00:09:50.552 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.149055 s, 7.0 MB/s 00:09:50.552 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:50.552 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:50.810 256+0 records in 00:09:50.810 256+0 records out 00:09:50.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152991 s, 6.9 MB/s 00:09:50.810 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:50.810 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:50.810 256+0 records in 00:09:50.810 256+0 records out 00:09:50.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152096 s, 6.9 MB/s 00:09:50.810 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:50.810 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:51.069 256+0 records in 00:09:51.069 256+0 records out 00:09:51.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150997 s, 6.9 MB/s 00:09:51.069 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.069 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:51.328 256+0 records in 00:09:51.328 256+0 records out 00:09:51.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150225 s, 7.0 MB/s 00:09:51.328 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.328 14:15:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:51.328 256+0 records in 00:09:51.328 256+0 records out 00:09:51.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147514 s, 7.1 MB/s 00:09:51.328 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.328 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:51.587 256+0 records in 00:09:51.587 256+0 records out 00:09:51.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148614 s, 7.1 MB/s 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.587 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:51.846 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:51.846 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:51.846 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:51.847 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.847 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.847 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:51.847 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:51.847 14:15:16 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:51.847 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.847 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.108 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.367 14:15:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:52.367 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:52.367 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:52.367 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:52.367 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.367 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.367 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.626 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.885 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.144 14:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:53.403 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:53.662 malloc_lvol_verify 00:09:53.662 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:53.921 b48cbe73-207b-4ddb-84fd-32b687c6fa35 00:09:53.921 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:54.180 14ca2c29-2218-4daa-abe7-893e3f5d1d47 00:09:54.180 14:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:54.439 /dev/nbd0 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:54.439 mke2fs 1.47.0 (5-Feb-2023) 00:09:54.439 Discarding device blocks: 0/4096 done 00:09:54.439 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:54.439 00:09:54.439 Allocating group tables: 0/1 done 00:09:54.439 Writing inode tables: 0/1 done 00:09:54.439 Creating journal (1024 blocks): done 00:09:54.439 Writing superblocks and filesystem accounting information: 0/1 done 00:09:54.439 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:54.439 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63883 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63883 ']' 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63883 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63883 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.698 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63883' 00:09:54.698 killing process with pid 63883 00:09:54.699 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63883 00:09:54.699 14:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63883 00:09:56.078 14:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:56.078 ************************************ 00:09:56.078 END TEST bdev_nbd 00:09:56.078 ************************************ 00:09:56.078 00:09:56.078 real 0m12.645s 00:09:56.078 user 0m15.963s 00:09:56.078 sys 0m5.430s 00:09:56.078 14:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.078 14:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:56.078 14:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:56.078 14:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:09:56.078 14:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:09:56.078 skipping fio tests on NVMe due to multi-ns failures. 00:09:56.078 14:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:56.078 14:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:56.078 14:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:56.078 14:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:56.078 14:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.078 14:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:56.078 ************************************ 00:09:56.078 START TEST bdev_verify 00:09:56.078 ************************************ 00:09:56.078 14:15:20 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:56.078 [2024-12-10 14:15:20.821274] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:09:56.078 [2024-12-10 14:15:20.821402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64313 ] 00:09:56.338 [2024-12-10 14:15:21.008987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.338 [2024-12-10 14:15:21.147440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.338 [2024-12-10 14:15:21.147470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.275 Running I/O for 5 seconds... 
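Stepping back over the nbd section that just closed: the test wrote one 1 MiB random pattern to every exported device with O_DIRECT, compared each device against the pattern with cmp, stopped every device over the RPC socket, and finally required nbd_get_disks to return an empty list. A condensed sketch of that cycle (the traced helpers run the write, verify, and stop phases as three separate loops; collapsing them into one is a simplification):

    rpc='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    nbd_list='/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # 1 MiB pattern
    for nbd in $nbd_list; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write it out
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"                              # read it back
        $rpc nbd_stop_disk "$nbd"
    done
    # With everything stopped, the disk list must come back empty.
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]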
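The bdev_verify stage now starting drives every bdev defined in bdev.json through bdevperf's verify workload, which writes each block, reads it back, and compares. Reformatted from the traced command line for readability (the reading of -C is an inference from the paired per-core job rows in the results below, not from the log itself):

    # -q 128      outstanding I/Os per job
    # -o 4096     I/O size in bytes
    # -w verify   write, read back, and compare
    # -t 5        run time in seconds
    # -C          every core submits to every bdev (hence one 0x1 and one 0x2 job per bdev below)
    # -m 0x3      core mask: cores 0 and 1
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3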
00:09:59.588 20800.00 IOPS, 81.25 MiB/s [2024-12-10T14:15:25.358Z] 20960.00 IOPS, 81.88 MiB/s [2024-12-10T14:15:26.295Z] 20629.33 IOPS, 80.58 MiB/s [2024-12-10T14:15:27.232Z] 19840.00 IOPS, 77.50 MiB/s [2024-12-10T14:15:27.232Z] 19238.40 IOPS, 75.15 MiB/s 00:10:02.398 Latency(us) 00:10:02.398 [2024-12-10T14:15:27.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.398 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x0 length 0xbd0bd 00:10:02.398 Nvme0n1 : 5.05 1393.83 5.44 0.00 0.00 91393.91 19687.12 95171.96 00:10:02.398 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:02.398 Nvme0n1 : 5.11 1303.23 5.09 0.00 0.00 98009.06 17265.71 94329.73 00:10:02.398 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x0 length 0x4ff80 00:10:02.398 Nvme1n1p1 : 5.10 1392.34 5.44 0.00 0.00 91203.99 18634.33 89276.35 00:10:02.398 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x4ff80 length 0x4ff80 00:10:02.398 Nvme1n1p1 : 5.11 1302.33 5.09 0.00 0.00 97891.17 18950.17 93066.38 00:10:02.398 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x0 length 0x4ff7f 00:10:02.398 Nvme1n1p2 : 5.12 1400.16 5.47 0.00 0.00 90809.75 12949.28 76642.90 00:10:02.398 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:10:02.398 Nvme1n1p2 : 5.11 1301.99 5.09 0.00 0.00 97602.61 19476.56 92224.15 00:10:02.398 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x0 length 0x80000 00:10:02.398 Nvme2n1 : 5.12 1399.41 5.47 0.00 0.00 90659.93 14633.74 77485.13 00:10:02.398 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x80000 length 0x80000 00:10:02.398 Nvme2n1 : 5.11 1301.70 5.08 0.00 0.00 97428.35 18739.61 90118.58 00:10:02.398 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x0 length 0x80000 00:10:02.398 Nvme2n2 : 5.12 1399.11 5.47 0.00 0.00 90523.91 14633.74 79590.71 00:10:02.398 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x80000 length 0x80000 00:10:02.398 Nvme2n2 : 5.11 1301.41 5.08 0.00 0.00 97277.12 18950.17 93066.38 00:10:02.398 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x0 length 0x80000 00:10:02.398 Nvme2n3 : 5.12 1398.81 5.46 0.00 0.00 90371.70 14633.74 82538.51 00:10:02.398 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x80000 length 0x80000 00:10:02.398 Nvme2n3 : 5.12 1300.76 5.08 0.00 0.00 97125.89 17476.27 93066.38 00:10:02.398 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x0 length 0x20000 00:10:02.398 Nvme3n1 : 5.13 1398.49 5.46 0.00 0.00 90220.05 14212.63 82117.40 00:10:02.398 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.398 Verification LBA range: start 0x20000 length 0x20000 00:10:02.398 
Nvme3n1 : 5.12 1300.18 5.08 0.00 0.00 96975.28 18002.66 93908.61 00:10:02.398 [2024-12-10T14:15:27.232Z] =================================================================================================================== 00:10:02.398 [2024-12-10T14:15:27.232Z] Total : 18893.77 73.80 0.00 0.00 93987.22 12949.28 95171.96 00:10:03.774 00:10:03.774 real 0m7.886s 00:10:03.774 user 0m14.298s 00:10:03.774 sys 0m0.419s 00:10:03.774 14:15:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.774 14:15:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:03.774 ************************************ 00:10:03.774 END TEST bdev_verify 00:10:03.774 ************************************ 00:10:04.034 14:15:28 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:04.034 14:15:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:04.034 14:15:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.034 14:15:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:04.034 ************************************ 00:10:04.034 START TEST bdev_verify_big_io 00:10:04.034 ************************************ 00:10:04.034 14:15:28 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:04.034 [2024-12-10 14:15:28.759534] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:10:04.034 [2024-12-10 14:15:28.759683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64417 ] 00:10:04.293 [2024-12-10 14:15:28.944041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:04.293 [2024-12-10 14:15:29.079184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.293 [2024-12-10 14:15:29.079229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.297 Running I/O for 5 seconds... 
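bdev_verify_big_io repeats the identical verify workload with 64 KiB I/O instead of 4 KiB, so the table below shows far fewer IOPS while each operation moves 16 times the data. The only change to the invocation:

    # Same verify run as above, with -o raised from 4096 to 65536 (64 KiB).
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3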
00:10:09.546 1687.00 IOPS, 105.44 MiB/s [2024-12-10T14:15:35.758Z] 2775.00 IOPS, 173.44 MiB/s [2024-12-10T14:15:36.696Z] 2897.67 IOPS, 181.10 MiB/s 00:10:11.862 Latency(us) 00:10:11.862 [2024-12-10T14:15:36.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.862 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x0 length 0xbd0b 00:10:11.862 Nvme0n1 : 5.50 168.58 10.54 0.00 0.00 729132.02 36215.88 1138694.58 00:10:11.862 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:11.862 Nvme0n1 : 5.68 94.70 5.92 0.00 0.00 1300740.24 30109.71 1886594.57 00:10:11.862 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x0 length 0x4ff8 00:10:11.862 Nvme1n1p1 : 5.53 190.74 11.92 0.00 0.00 641359.93 75379.56 677152.69 00:10:11.862 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x4ff8 length 0x4ff8 00:10:11.862 Nvme1n1p1 : 5.73 97.92 6.12 0.00 0.00 1192750.04 41269.26 1900070.25 00:10:11.862 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x0 length 0x4ff7 00:10:11.862 Nvme1n1p2 : 5.53 196.61 12.29 0.00 0.00 619502.20 23687.71 626618.91 00:10:11.862 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x4ff7 length 0x4ff7 00:10:11.862 Nvme1n1p2 : 5.73 103.29 6.46 0.00 0.00 1094408.16 41058.70 1940497.27 00:10:11.862 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x0 length 0x8000 00:10:11.862 Nvme2n1 : 5.54 196.53 12.28 0.00 0.00 608639.38 24424.66 633356.75 00:10:11.862 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x8000 length 0x8000 00:10:11.862 Nvme2n1 : 5.82 114.21 7.14 0.00 0.00 970686.37 29056.93 1994399.97 00:10:11.862 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x0 length 0x8000 00:10:11.862 Nvme2n2 : 5.57 203.00 12.69 0.00 0.00 581923.72 19055.45 646832.42 00:10:11.862 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x8000 length 0x8000 00:10:11.862 Nvme2n2 : 5.95 138.54 8.66 0.00 0.00 771776.69 20318.79 2021351.33 00:10:11.862 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x0 length 0x8000 00:10:11.862 Nvme2n3 : 5.56 201.41 12.59 0.00 0.00 576535.29 19581.84 656939.18 00:10:11.862 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x8000 length 0x8000 00:10:11.862 Nvme2n3 : 6.11 184.25 11.52 0.00 0.00 560509.63 7737.99 2048302.68 00:10:11.862 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:11.862 Verification LBA range: start 0x0 length 0x2000 00:10:11.862 Nvme3n1 : 5.57 210.46 13.15 0.00 0.00 543374.78 4684.90 670414.86 00:10:11.863 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:11.863 Verification LBA range: start 0x2000 length 0x2000 00:10:11.863 Nvme3n1 : 6.30 288.30 18.02 0.00 0.00 350436.89 1164.65 1509275.66 00:10:11.863 
[2024-12-10T14:15:36.697Z] =================================================================================================================== 00:10:11.863 [2024-12-10T14:15:36.697Z] Total : 2388.55 149.28 0.00 0.00 671025.81 1164.65 2048302.68 00:10:13.769 00:10:13.769 real 0m9.767s 00:10:13.769 user 0m18.209s 00:10:13.769 sys 0m0.419s 00:10:13.769 14:15:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.769 ************************************ 00:10:13.769 END TEST bdev_verify_big_io 00:10:13.769 ************************************ 00:10:13.769 14:15:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:13.769 14:15:38 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:13.769 14:15:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:13.769 14:15:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.769 14:15:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:13.769 ************************************ 00:10:13.769 START TEST bdev_write_zeroes 00:10:13.769 ************************************ 00:10:13.769 14:15:38 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:14.028 [2024-12-10 14:15:38.622815] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:10:14.028 [2024-12-10 14:15:38.622957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64537 ] 00:10:14.028 [2024-12-10 14:15:38.808575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.288 [2024-12-10 14:15:38.944125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.225 Running I/O for 1 seconds... 
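bdev_write_zeroes swaps the workload for write_zeroes, a one-second single-core run (the EAL line above shows -c 0x1) that exercises each bdev's zero-fill path rather than data verification:

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1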
00:10:16.159 65856.00 IOPS, 257.25 MiB/s 00:10:16.159 Latency(us) 00:10:16.159 [2024-12-10T14:15:40.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.159 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.159 Nvme0n1 : 1.03 9360.72 36.57 0.00 0.00 13631.96 11738.58 37479.22 00:10:16.159 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.159 Nvme1n1p1 : 1.03 9350.35 36.52 0.00 0.00 13625.45 12054.41 38321.45 00:10:16.159 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.159 Nvme1n1p2 : 1.03 9340.34 36.49 0.00 0.00 13572.46 11791.22 35373.65 00:10:16.159 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.159 Nvme2n1 : 1.03 9384.43 36.66 0.00 0.00 13439.10 7211.59 28214.70 00:10:16.159 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.159 Nvme2n2 : 1.03 9376.43 36.63 0.00 0.00 13431.59 7158.95 28214.70 00:10:16.159 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.159 Nvme2n3 : 1.03 9367.99 36.59 0.00 0.00 13372.28 7158.95 24003.55 00:10:16.159 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.159 Nvme3n1 : 1.03 9359.92 36.56 0.00 0.00 13336.80 7264.23 23582.43 00:10:16.159 [2024-12-10T14:15:40.993Z] =================================================================================================================== 00:10:16.159 [2024-12-10T14:15:40.993Z] Total : 65540.17 256.02 0.00 0.00 13486.74 7158.95 38321.45 00:10:17.538 00:10:17.538 real 0m3.451s 00:10:17.538 user 0m2.985s 00:10:17.538 sys 0m0.351s 00:10:17.538 14:15:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.538 14:15:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:17.538 ************************************ 00:10:17.538 END TEST bdev_write_zeroes 00:10:17.538 ************************************ 00:10:17.538 14:15:42 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:17.538 14:15:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:17.538 14:15:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.538 14:15:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:17.538 ************************************ 00:10:17.538 START TEST bdev_json_nonenclosed 00:10:17.538 ************************************ 00:10:17.538 14:15:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:17.538 [2024-12-10 14:15:42.150010] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:10:17.538 [2024-12-10 14:15:42.150140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64590 ] 00:10:17.538 [2024-12-10 14:15:42.335401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.798 [2024-12-10 14:15:42.468976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.798 [2024-12-10 14:15:42.469089] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:17.798 [2024-12-10 14:15:42.469115] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:17.798 [2024-12-10 14:15:42.469128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:18.057 00:10:18.057 real 0m0.688s 00:10:18.057 user 0m0.413s 00:10:18.057 sys 0m0.171s 00:10:18.057 14:15:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.057 ************************************ 00:10:18.057 END TEST bdev_json_nonenclosed 00:10:18.057 ************************************ 00:10:18.057 14:15:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:18.057 14:15:42 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:18.057 14:15:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:18.057 14:15:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.057 14:15:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:18.057 ************************************ 00:10:18.057 START TEST bdev_json_nonarray 00:10:18.057 ************************************ 00:10:18.057 14:15:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:18.317 [2024-12-10 14:15:42.907047] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:10:18.317 [2024-12-10 14:15:42.907192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64621 ] 00:10:18.317 [2024-12-10 14:15:43.088759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.576 [2024-12-10 14:15:43.215878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.576 [2024-12-10 14:15:43.216017] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
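bdev_json_nonenclosed above and bdev_json_nonarray here are negative tests: each hands bdevperf a deliberately malformed --json config and passes only when the app stops with a non-zero code, matching the traced *ERROR* lines. The fixture contents are not shown in the log, so the following are illustrative reconstructions that would trip the same two checks, not the actual test files:

    # nonenclosed.json: top-level members without the enclosing braces
    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed.json
    # nonarray.json: enclosed, but "subsystems" is not an array
    printf '%s\n' '{"subsystems": "bdev"}' > /tmp/nonarray.json
    # Both runs must fail; the harness only cares about the non-zero exit.
    build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 && exit 1
    build/examples/bdevperf --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 && exit 1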
00:10:18.576 [2024-12-10 14:15:43.216044] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:18.576 [2024-12-10 14:15:43.216059] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:18.836 00:10:18.836 real 0m0.673s 00:10:18.836 user 0m0.406s 00:10:18.836 sys 0m0.163s 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:18.836 ************************************ 00:10:18.836 END TEST bdev_json_nonarray 00:10:18.836 ************************************ 00:10:18.836 14:15:43 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:10:18.836 14:15:43 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:10:18.836 14:15:43 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:18.836 14:15:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.836 14:15:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.836 14:15:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:18.836 ************************************ 00:10:18.836 START TEST bdev_gpt_uuid 00:10:18.836 ************************************ 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64647 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64647 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64647 ']' 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.836 14:15:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:19.095 [2024-12-10 14:15:43.669786] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
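The bdev_gpt_uuid test starting here boots a bare spdk_tgt, loads the same bdev.json over RPC, waits for bdev examine to finish, and then asserts the GPT metadata on both partitions. Condensed, the checks traced below amount to the following (shown for the first partition; the same three assertions repeat with the second GUID, abf1734f-66e5-4c0f-aa29-4021d4d307df):

    rpc='scripts/rpc.py'   # talks to the default /var/tmp/spdk.sock
    $rpc load_config -j test/bdev/bdev.json
    $rpc bdev_wait_for_examine
    # Look the partition bdev up by its unique GPT GUID and cross-check the JSON.
    bdev=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
    [[ $(jq -r length <<<"$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]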
00:10:19.096 [2024-12-10 14:15:43.669935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64647 ] 00:10:19.096 [2024-12-10 14:15:43.850990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.355 [2024-12-10 14:15:43.982080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.293 14:15:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.293 14:15:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:10:20.293 14:15:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:20.293 14:15:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.293 14:15:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:20.551 Some configs were skipped because the RPC state that can call them passed over. 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:10:20.551 { 00:10:20.551 "name": "Nvme1n1p1", 00:10:20.551 "aliases": [ 00:10:20.551 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:20.551 ], 00:10:20.551 "product_name": "GPT Disk", 00:10:20.551 "block_size": 4096, 00:10:20.551 "num_blocks": 655104, 00:10:20.551 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:20.551 "assigned_rate_limits": { 00:10:20.551 "rw_ios_per_sec": 0, 00:10:20.551 "rw_mbytes_per_sec": 0, 00:10:20.551 "r_mbytes_per_sec": 0, 00:10:20.551 "w_mbytes_per_sec": 0 00:10:20.551 }, 00:10:20.551 "claimed": false, 00:10:20.551 "zoned": false, 00:10:20.551 "supported_io_types": { 00:10:20.551 "read": true, 00:10:20.551 "write": true, 00:10:20.551 "unmap": true, 00:10:20.551 "flush": true, 00:10:20.551 "reset": true, 00:10:20.551 "nvme_admin": false, 00:10:20.551 "nvme_io": false, 00:10:20.551 "nvme_io_md": false, 00:10:20.551 "write_zeroes": true, 00:10:20.551 "zcopy": false, 00:10:20.551 "get_zone_info": false, 00:10:20.551 "zone_management": false, 00:10:20.551 "zone_append": false, 00:10:20.551 "compare": true, 00:10:20.551 "compare_and_write": false, 00:10:20.551 "abort": true, 00:10:20.551 "seek_hole": false, 00:10:20.551 "seek_data": false, 00:10:20.551 "copy": true, 00:10:20.551 "nvme_iov_md": false 00:10:20.551 }, 00:10:20.551 "driver_specific": { 
00:10:20.551 "gpt": { 00:10:20.551 "base_bdev": "Nvme1n1", 00:10:20.551 "offset_blocks": 256, 00:10:20.551 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:20.551 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:20.551 "partition_name": "SPDK_TEST_first" 00:10:20.551 } 00:10:20.551 } 00:10:20.551 } 00:10:20.551 ]' 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:10:20.551 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:10:20.812 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:20.812 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:20.812 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:20.812 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:20.812 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.812 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:20.812 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.812 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:10:20.812 { 00:10:20.812 "name": "Nvme1n1p2", 00:10:20.812 "aliases": [ 00:10:20.812 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:20.812 ], 00:10:20.812 "product_name": "GPT Disk", 00:10:20.812 "block_size": 4096, 00:10:20.812 "num_blocks": 655103, 00:10:20.812 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:20.812 "assigned_rate_limits": { 00:10:20.812 "rw_ios_per_sec": 0, 00:10:20.812 "rw_mbytes_per_sec": 0, 00:10:20.812 "r_mbytes_per_sec": 0, 00:10:20.812 "w_mbytes_per_sec": 0 00:10:20.812 }, 00:10:20.812 "claimed": false, 00:10:20.812 "zoned": false, 00:10:20.812 "supported_io_types": { 00:10:20.812 "read": true, 00:10:20.812 "write": true, 00:10:20.812 "unmap": true, 00:10:20.812 "flush": true, 00:10:20.812 "reset": true, 00:10:20.812 "nvme_admin": false, 00:10:20.812 "nvme_io": false, 00:10:20.812 "nvme_io_md": false, 00:10:20.812 "write_zeroes": true, 00:10:20.812 "zcopy": false, 00:10:20.812 "get_zone_info": false, 00:10:20.812 "zone_management": false, 00:10:20.812 "zone_append": false, 00:10:20.812 "compare": true, 00:10:20.812 "compare_and_write": false, 00:10:20.812 "abort": true, 00:10:20.812 "seek_hole": false, 00:10:20.812 "seek_data": false, 00:10:20.812 "copy": true, 00:10:20.812 "nvme_iov_md": false 00:10:20.812 }, 00:10:20.812 "driver_specific": { 00:10:20.812 "gpt": { 00:10:20.812 "base_bdev": "Nvme1n1", 00:10:20.812 "offset_blocks": 655360, 00:10:20.812 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:20.813 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:20.813 "partition_name": "SPDK_TEST_second" 00:10:20.813 } 00:10:20.813 } 00:10:20.813 } 00:10:20.813 ]' 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 64647 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64647 ']' 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64647 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.813 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64647 00:10:21.095 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.095 killing process with pid 64647 00:10:21.095 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.095 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64647' 00:10:21.095 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64647 00:10:21.095 14:15:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64647 00:10:23.651 00:10:23.651 real 0m4.607s 00:10:23.651 user 0m4.559s 00:10:23.651 sys 0m0.686s 00:10:23.651 14:15:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.651 14:15:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:23.651 ************************************ 00:10:23.651 END TEST bdev_gpt_uuid 00:10:23.651 ************************************ 00:10:23.651 14:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:10:23.651 14:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:23.651 14:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:10:23.651 14:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:23.651 14:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:23.651 14:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:23.651 14:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:23.651 14:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:23.651 14:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:24.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.478 Waiting for block devices as requested 00:10:24.478 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.478 0000:00:10.0 (1b36 0010): 
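The GPT uuid assertions above all follow one pattern: fetch the bdev description as JSON over RPC, extract a single field with jq, and compare it with bash pattern matching against the GUID written into the partition table. A minimal standalone sketch of that pattern, assuming a running SPDK app, the default RPC socket, and an illustrative bdev name (paths relative to the SPDK repo):

  # dump the bdev as JSON and pull out the GPT partition GUID
  bdev_json=$(scripts/rpc.py bdev_get_bdevs -b Nvme1n1p2)
  guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json")
  alias=$(jq -r '.[0].aliases[0]' <<< "$bdev_json")
  # the test expects the partition GUID to match the bdev's reported alias
  [[ "$guid" == "$alias" ]] || echo "GUID mismatch: $guid vs $alias"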
uio_pci_generic -> nvme 00:10:24.737 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.737 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:30.013 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:30.013 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:10:30.013 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:10:30.013 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:30.013 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:30.013 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:30.013 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:30.013 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:30.013 00:10:30.013 real 1m7.663s 00:10:30.013 user 1m23.275s 00:10:30.013 sys 0m13.135s 00:10:30.013 14:15:54 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.013 14:15:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:30.013 ************************************ 00:10:30.013 END TEST blockdev_nvme_gpt 00:10:30.013 ************************************ 00:10:30.272 14:15:54 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:30.272 14:15:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:30.272 14:15:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.272 14:15:54 -- common/autotest_common.sh@10 -- # set +x 00:10:30.272 ************************************ 00:10:30.272 START TEST nvme 00:10:30.272 ************************************ 00:10:30.272 14:15:54 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:30.272 * Looking for test storage... 00:10:30.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:30.272 14:15:55 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:30.272 14:15:55 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:10:30.272 14:15:55 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:30.532 14:15:55 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:30.532 14:15:55 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.532 14:15:55 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.532 14:15:55 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.532 14:15:55 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.532 14:15:55 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.532 14:15:55 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.532 14:15:55 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.532 14:15:55 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.532 14:15:55 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.532 14:15:55 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.532 14:15:55 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.532 14:15:55 nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:30.532 14:15:55 nvme -- scripts/common.sh@345 -- # : 1 00:10:30.532 14:15:55 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.532 14:15:55 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.532 14:15:55 nvme -- scripts/common.sh@365 -- # decimal 1 00:10:30.532 14:15:55 nvme -- scripts/common.sh@353 -- # local d=1 00:10:30.532 14:15:55 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.532 14:15:55 nvme -- scripts/common.sh@355 -- # echo 1 00:10:30.532 14:15:55 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.532 14:15:55 nvme -- scripts/common.sh@366 -- # decimal 2 00:10:30.532 14:15:55 nvme -- scripts/common.sh@353 -- # local d=2 00:10:30.532 14:15:55 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.532 14:15:55 nvme -- scripts/common.sh@355 -- # echo 2 00:10:30.532 14:15:55 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.532 14:15:55 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.532 14:15:55 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.532 14:15:55 nvme -- scripts/common.sh@368 -- # return 0 00:10:30.532 14:15:55 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.532 14:15:55 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.532 --rc genhtml_branch_coverage=1 00:10:30.532 --rc genhtml_function_coverage=1 00:10:30.532 --rc genhtml_legend=1 00:10:30.532 --rc geninfo_all_blocks=1 00:10:30.532 --rc geninfo_unexecuted_blocks=1 00:10:30.532 00:10:30.532 ' 00:10:30.532 14:15:55 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.532 --rc genhtml_branch_coverage=1 00:10:30.532 --rc genhtml_function_coverage=1 00:10:30.532 --rc genhtml_legend=1 00:10:30.532 --rc geninfo_all_blocks=1 00:10:30.532 --rc geninfo_unexecuted_blocks=1 00:10:30.532 00:10:30.532 ' 00:10:30.532 14:15:55 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.532 --rc genhtml_branch_coverage=1 00:10:30.532 --rc genhtml_function_coverage=1 00:10:30.532 --rc genhtml_legend=1 00:10:30.532 --rc geninfo_all_blocks=1 00:10:30.532 --rc geninfo_unexecuted_blocks=1 00:10:30.532 00:10:30.532 ' 00:10:30.532 14:15:55 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.532 --rc genhtml_branch_coverage=1 00:10:30.532 --rc genhtml_function_coverage=1 00:10:30.532 --rc genhtml_legend=1 00:10:30.532 --rc geninfo_all_blocks=1 00:10:30.532 --rc geninfo_unexecuted_blocks=1 00:10:30.532 00:10:30.532 ' 00:10:30.532 14:15:55 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:31.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:32.039 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.040 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.040 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.040 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.299 14:15:56 nvme -- nvme/nvme.sh@79 -- # uname 00:10:32.299 14:15:56 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:32.299 14:15:56 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:32.299 14:15:56 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:32.299 14:15:56 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:32.299 14:15:56 nvme -- 
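The lcov gate above uses the cmp_versions helper from scripts/common.sh: both version strings are split on ".", "-" and ":" into arrays and compared numerically, component by component. A compressed sketch of the same idea (not the library function itself; names are illustrative):

  # return 0 when $1 sorts strictly before $2, comparing numeric components
  version_lt() {
      local IFS=.-: i a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # versions are equal
  }
  version_lt 1.15 2 && echo 'lcov predates the --rc coverage options'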
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:10:32.299 14:15:56 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:10:32.299 14:15:56 nvme -- common/autotest_common.sh@1075 -- # stubpid=65318 00:10:32.299 Waiting for stub to ready for secondary processes... 00:10:32.299 14:15:56 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:10:32.299 14:15:56 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:32.299 14:15:56 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:32.299 14:15:56 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65318 ]] 00:10:32.299 14:15:56 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:10:32.299 [2024-12-10 14:15:56.950904] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:10:32.299 [2024-12-10 14:15:56.951062] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:33.238 14:15:57 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:33.238 14:15:57 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65318 ]] 00:10:33.238 14:15:57 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:10:34.175 [2024-12-10 14:15:58.650155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:34.175 [2024-12-10 14:15:58.775497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.175 [2024-12-10 14:15:58.775661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.175 [2024-12-10 14:15:58.775730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.175 [2024-12-10 14:15:58.795811] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:34.175 [2024-12-10 14:15:58.795867] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:34.175 [2024-12-10 14:15:58.811586] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:34.175 [2024-12-10 14:15:58.811850] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:34.175 [2024-12-10 14:15:58.817211] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:34.175 [2024-12-10 14:15:58.817487] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:34.175 [2024-12-10 14:15:58.817587] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:34.175 [2024-12-10 14:15:58.821259] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:34.175 [2024-12-10 14:15:58.821490] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:34.175 [2024-12-10 14:15:58.821579] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:34.175 [2024-12-10 14:15:58.825379] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:34.175 [2024-12-10 14:15:58.825611] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:34.175 [2024-12-10 14:15:58.825725] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:34.175 [2024-12-10 14:15:58.825812] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:34.175 [2024-12-10 14:15:58.825877] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:34.175 14:15:58 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:34.175 done. 00:10:34.175 14:15:58 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:10:34.175 14:15:58 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:34.175 14:15:58 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:10:34.175 14:15:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.175 14:15:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:34.175 ************************************ 00:10:34.175 START TEST nvme_reset 00:10:34.175 ************************************ 00:10:34.175 14:15:58 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:34.434 Initializing NVMe Controllers 00:10:34.434 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:34.434 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:34.434 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:34.434 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:34.434 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:34.434 00:10:34.434 real 0m0.315s 00:10:34.434 user 0m0.101s 00:10:34.434 sys 0m0.164s 00:10:34.434 14:15:59 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.434 14:15:59 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:34.434 ************************************ 00:10:34.434 END TEST nvme_reset 00:10:34.434 ************************************ 00:10:34.694 14:15:59 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:34.694 14:15:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:34.694 14:15:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.694 14:15:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:34.694 ************************************ 00:10:34.694 START TEST nvme_identify 00:10:34.694 ************************************ 00:10:34.694 14:15:59 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:10:34.694 14:15:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:34.694 14:15:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:34.694 14:15:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:34.694 14:15:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:34.694 14:15:59 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:34.694 14:15:59 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:10:34.694 14:15:59 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:34.694 14:15:59 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:34.694 14:15:59 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:34.694 14:15:59 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:34.694 14:15:59 nvme.nvme_identify -- 
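Before any per-test binaries run, the harness starts the multi-process stub (a bare SPDK primary process holding the 4096 MB hugepage pool on core mask 0xE, hence the reactors on cores 1-3 above) and polls for its sentinel file, which is what the repeated '[' -e /var/run/spdk_stub0 ']' / sleep 1s lines are doing. A minimal sketch of that startup handshake, assuming $rootdir points at the SPDK repo:

  # launch the stub as the primary process and wait for it to come up
  "$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
  stubpid=$!
  while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
      sleep 1s
  done
  [ -e /var/run/spdk_stub0 ] || { echo 'stub failed to start'; exit 1; }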
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:34.694 14:15:59 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:34.956 [2024-12-10 14:15:59.659458] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 65346 terminated unexpected 00:10:34.956 ===================================================== 00:10:34.956 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:34.956 ===================================================== 00:10:34.956 Controller Capabilities/Features 00:10:34.956 ================================ 00:10:34.956 Vendor ID: 1b36 00:10:34.956 Subsystem Vendor ID: 1af4 00:10:34.956 Serial Number: 12340 00:10:34.956 Model Number: QEMU NVMe Ctrl 00:10:34.956 Firmware Version: 8.0.0 00:10:34.956 Recommended Arb Burst: 6 00:10:34.956 IEEE OUI Identifier: 00 54 52 00:10:34.956 Multi-path I/O 00:10:34.956 May have multiple subsystem ports: No 00:10:34.956 May have multiple controllers: No 00:10:34.956 Associated with SR-IOV VF: No 00:10:34.956 Max Data Transfer Size: 524288 00:10:34.956 Max Number of Namespaces: 256 00:10:34.956 Max Number of I/O Queues: 64 00:10:34.956 NVMe Specification Version (VS): 1.4 00:10:34.956 NVMe Specification Version (Identify): 1.4 00:10:34.956 Maximum Queue Entries: 2048 00:10:34.956 Contiguous Queues Required: Yes 00:10:34.956 Arbitration Mechanisms Supported 00:10:34.956 Weighted Round Robin: Not Supported 00:10:34.956 Vendor Specific: Not Supported 00:10:34.956 Reset Timeout: 7500 ms 00:10:34.956 Doorbell Stride: 4 bytes 00:10:34.956 NVM Subsystem Reset: Not Supported 00:10:34.956 Command Sets Supported 00:10:34.956 NVM Command Set: Supported 00:10:34.956 Boot Partition: Not Supported 00:10:34.956 Memory Page Size Minimum: 4096 bytes 00:10:34.956 Memory Page Size Maximum: 65536 bytes 00:10:34.956 Persistent Memory Region: Not Supported 00:10:34.956 Optional Asynchronous Events Supported 00:10:34.956 Namespace Attribute Notices: Supported 00:10:34.956 Firmware Activation Notices: Not Supported 00:10:34.956 ANA Change Notices: Not Supported 00:10:34.956 PLE Aggregate Log Change Notices: Not Supported 00:10:34.956 LBA Status Info Alert Notices: Not Supported 00:10:34.956 EGE Aggregate Log Change Notices: Not Supported 00:10:34.956 Normal NVM Subsystem Shutdown event: Not Supported 00:10:34.956 Zone Descriptor Change Notices: Not Supported 00:10:34.956 Discovery Log Change Notices: Not Supported 00:10:34.956 Controller Attributes 00:10:34.956 128-bit Host Identifier: Not Supported 00:10:34.956 Non-Operational Permissive Mode: Not Supported 00:10:34.956 NVM Sets: Not Supported 00:10:34.956 Read Recovery Levels: Not Supported 00:10:34.956 Endurance Groups: Not Supported 00:10:34.956 Predictable Latency Mode: Not Supported 00:10:34.956 Traffic Based Keep ALive: Not Supported 00:10:34.956 Namespace Granularity: Not Supported 00:10:34.956 SQ Associations: Not Supported 00:10:34.956 UUID List: Not Supported 00:10:34.956 Multi-Domain Subsystem: Not Supported 00:10:34.956 Fixed Capacity Management: Not Supported 00:10:34.956 Variable Capacity Management: Not Supported 00:10:34.956 Delete Endurance Group: Not Supported 00:10:34.956 Delete NVM Set: Not Supported 00:10:34.956 Extended LBA Formats Supported: Supported 00:10:34.956 Flexible Data Placement Supported: Not Supported 00:10:34.956 00:10:34.956 Controller Memory Buffer Support 00:10:34.956 ================================ 00:10:34.956 Supported: No 
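nvme_identify builds its device list by rendering gen_nvme.sh's JSON config through jq, then runs the identify example once; the -i 0 flag attaches it to the stub's shared-memory group instead of probing the devices directly, which is why a single invocation reports every controller below. A condensed sketch of those two steps, again assuming $rootdir points at the SPDK repo:

  # enumerate NVMe PCI addresses from the generated config
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1
  # identify all controllers via the stub's shared memory (-i <shm group id>)
  "$rootdir/build/bin/spdk_nvme_identify" -i 0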
00:10:34.956 00:10:34.956 Persistent Memory Region Support 00:10:34.956 ================================ 00:10:34.956 Supported: No 00:10:34.956 00:10:34.956 Admin Command Set Attributes 00:10:34.956 ============================ 00:10:34.956 Security Send/Receive: Not Supported 00:10:34.956 Format NVM: Supported 00:10:34.956 Firmware Activate/Download: Not Supported 00:10:34.956 Namespace Management: Supported 00:10:34.956 Device Self-Test: Not Supported 00:10:34.956 Directives: Supported 00:10:34.956 NVMe-MI: Not Supported 00:10:34.956 Virtualization Management: Not Supported 00:10:34.956 Doorbell Buffer Config: Supported 00:10:34.956 Get LBA Status Capability: Not Supported 00:10:34.956 Command & Feature Lockdown Capability: Not Supported 00:10:34.956 Abort Command Limit: 4 00:10:34.956 Async Event Request Limit: 4 00:10:34.956 Number of Firmware Slots: N/A 00:10:34.956 Firmware Slot 1 Read-Only: N/A 00:10:34.956 Firmware Activation Without Reset: N/A 00:10:34.956 Multiple Update Detection Support: N/A 00:10:34.956 Firmware Update Granularity: No Information Provided 00:10:34.956 Per-Namespace SMART Log: Yes 00:10:34.956 Asymmetric Namespace Access Log Page: Not Supported 00:10:34.956 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:34.956 Command Effects Log Page: Supported 00:10:34.956 Get Log Page Extended Data: Supported 00:10:34.956 Telemetry Log Pages: Not Supported 00:10:34.956 Persistent Event Log Pages: Not Supported 00:10:34.956 Supported Log Pages Log Page: May Support 00:10:34.956 Commands Supported & Effects Log Page: Not Supported 00:10:34.956 Feature Identifiers & Effects Log Page:May Support 00:10:34.956 NVMe-MI Commands & Effects Log Page: May Support 00:10:34.956 Data Area 4 for Telemetry Log: Not Supported 00:10:34.956 Error Log Page Entries Supported: 1 00:10:34.956 Keep Alive: Not Supported 00:10:34.956 00:10:34.956 NVM Command Set Attributes 00:10:34.956 ========================== 00:10:34.956 Submission Queue Entry Size 00:10:34.956 Max: 64 00:10:34.956 Min: 64 00:10:34.956 Completion Queue Entry Size 00:10:34.956 Max: 16 00:10:34.956 Min: 16 00:10:34.956 Number of Namespaces: 256 00:10:34.956 Compare Command: Supported 00:10:34.956 Write Uncorrectable Command: Not Supported 00:10:34.956 Dataset Management Command: Supported 00:10:34.956 Write Zeroes Command: Supported 00:10:34.956 Set Features Save Field: Supported 00:10:34.956 Reservations: Not Supported 00:10:34.956 Timestamp: Supported 00:10:34.956 Copy: Supported 00:10:34.956 Volatile Write Cache: Present 00:10:34.956 Atomic Write Unit (Normal): 1 00:10:34.956 Atomic Write Unit (PFail): 1 00:10:34.956 Atomic Compare & Write Unit: 1 00:10:34.956 Fused Compare & Write: Not Supported 00:10:34.956 Scatter-Gather List 00:10:34.956 SGL Command Set: Supported 00:10:34.956 SGL Keyed: Not Supported 00:10:34.956 SGL Bit Bucket Descriptor: Not Supported 00:10:34.956 SGL Metadata Pointer: Not Supported 00:10:34.956 Oversized SGL: Not Supported 00:10:34.956 SGL Metadata Address: Not Supported 00:10:34.956 SGL Offset: Not Supported 00:10:34.956 Transport SGL Data Block: Not Supported 00:10:34.956 Replay Protected Memory Block: Not Supported 00:10:34.956 00:10:34.956 Firmware Slot Information 00:10:34.956 ========================= 00:10:34.956 Active slot: 1 00:10:34.956 Slot 1 Firmware Revision: 1.0 00:10:34.956 00:10:34.956 00:10:34.956 Commands Supported and Effects 00:10:34.956 ============================== 00:10:34.956 Admin Commands 00:10:34.956 -------------- 00:10:34.956 Delete I/O Submission Queue (00h): Supported 
00:10:34.956 Create I/O Submission Queue (01h): Supported 00:10:34.956 Get Log Page (02h): Supported 00:10:34.956 Delete I/O Completion Queue (04h): Supported 00:10:34.956 Create I/O Completion Queue (05h): Supported 00:10:34.956 Identify (06h): Supported 00:10:34.956 Abort (08h): Supported 00:10:34.956 Set Features (09h): Supported 00:10:34.956 Get Features (0Ah): Supported 00:10:34.956 Asynchronous Event Request (0Ch): Supported 00:10:34.956 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:34.956 Directive Send (19h): Supported 00:10:34.956 Directive Receive (1Ah): Supported 00:10:34.956 Virtualization Management (1Ch): Supported 00:10:34.956 Doorbell Buffer Config (7Ch): Supported 00:10:34.956 Format NVM (80h): Supported LBA-Change 00:10:34.956 I/O Commands 00:10:34.956 ------------ 00:10:34.956 Flush (00h): Supported LBA-Change 00:10:34.956 Write (01h): Supported LBA-Change 00:10:34.956 Read (02h): Supported 00:10:34.956 Compare (05h): Supported 00:10:34.956 Write Zeroes (08h): Supported LBA-Change 00:10:34.956 Dataset Management (09h): Supported LBA-Change 00:10:34.956 Unknown (0Ch): Supported 00:10:34.956 Unknown (12h): Supported 00:10:34.956 Copy (19h): Supported LBA-Change 00:10:34.956 Unknown (1Dh): Supported LBA-Change 00:10:34.956 00:10:34.956 Error Log 00:10:34.956 ========= 00:10:34.956 00:10:34.956 Arbitration 00:10:34.956 =========== 00:10:34.956 Arbitration Burst: no limit 00:10:34.956 00:10:34.956 Power Management 00:10:34.956 ================ 00:10:34.956 Number of Power States: 1 00:10:34.956 Current Power State: Power State #0 00:10:34.956 Power State #0: 00:10:34.956 Max Power: 25.00 W 00:10:34.956 Non-Operational State: Operational 00:10:34.957 Entry Latency: 16 microseconds 00:10:34.957 Exit Latency: 4 microseconds 00:10:34.957 Relative Read Throughput: 0 00:10:34.957 Relative Read Latency: 0 00:10:34.957 Relative Write Throughput: 0 00:10:34.957 Relative Write Latency: 0 00:10:34.957 Idle Power[2024-12-10 14:15:59.660640] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 65346 terminated unexpected 00:10:34.957 : Not Reported 00:10:34.957 Active Power: Not Reported 00:10:34.957 Non-Operational Permissive Mode: Not Supported 00:10:34.957 00:10:34.957 Health Information 00:10:34.957 ================== 00:10:34.957 Critical Warnings: 00:10:34.957 Available Spare Space: OK 00:10:34.957 Temperature: OK 00:10:34.957 Device Reliability: OK 00:10:34.957 Read Only: No 00:10:34.957 Volatile Memory Backup: OK 00:10:34.957 Current Temperature: 323 Kelvin (50 Celsius) 00:10:34.957 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:34.957 Available Spare: 0% 00:10:34.957 Available Spare Threshold: 0% 00:10:34.957 Life Percentage Used: 0% 00:10:34.957 Data Units Read: 750 00:10:34.957 Data Units Written: 679 00:10:34.957 Host Read Commands: 32942 00:10:34.957 Host Write Commands: 32728 00:10:34.957 Controller Busy Time: 0 minutes 00:10:34.957 Power Cycles: 0 00:10:34.957 Power On Hours: 0 hours 00:10:34.957 Unsafe Shutdowns: 0 00:10:34.957 Unrecoverable Media Errors: 0 00:10:34.957 Lifetime Error Log Entries: 0 00:10:34.957 Warning Temperature Time: 0 minutes 00:10:34.957 Critical Temperature Time: 0 minutes 00:10:34.957 00:10:34.957 Number of Queues 00:10:34.957 ================ 00:10:34.957 Number of I/O Submission Queues: 64 00:10:34.957 Number of I/O Completion Queues: 64 00:10:34.957 00:10:34.957 ZNS Specific Controller Data 00:10:34.957 ============================ 00:10:34.957 Zone Append Size Limit: 0 00:10:34.957 
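The identify dump is plain text, one block per controller, so quick checks against it can be made with ordinary text tools. A purely illustrative example (not part of the test flow) that pairs each serial number with its reported temperature from the tool's raw stdout:

  "$rootdir/build/bin/spdk_nvme_identify" -i 0 \
      | awk -F': ' '/^Serial Number:/ {sn=$2} /^Current Temperature:/ {print sn, $2}'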
00:10:34.957 00:10:34.957 Active Namespaces 00:10:34.957 ================= 00:10:34.957 Namespace ID:1 00:10:34.957 Error Recovery Timeout: Unlimited 00:10:34.957 Command Set Identifier: NVM (00h) 00:10:34.957 Deallocate: Supported 00:10:34.957 Deallocated/Unwritten Error: Supported 00:10:34.957 Deallocated Read Value: All 0x00 00:10:34.957 Deallocate in Write Zeroes: Not Supported 00:10:34.957 Deallocated Guard Field: 0xFFFF 00:10:34.957 Flush: Supported 00:10:34.957 Reservation: Not Supported 00:10:34.957 Metadata Transferred as: Separate Metadata Buffer 00:10:34.957 Namespace Sharing Capabilities: Private 00:10:34.957 Size (in LBAs): 1548666 (5GiB) 00:10:34.957 Capacity (in LBAs): 1548666 (5GiB) 00:10:34.957 Utilization (in LBAs): 1548666 (5GiB) 00:10:34.957 Thin Provisioning: Not Supported 00:10:34.957 Per-NS Atomic Units: No 00:10:34.957 Maximum Single Source Range Length: 128 00:10:34.957 Maximum Copy Length: 128 00:10:34.957 Maximum Source Range Count: 128 00:10:34.957 NGUID/EUI64 Never Reused: No 00:10:34.957 Namespace Write Protected: No 00:10:34.957 Number of LBA Formats: 8 00:10:34.957 Current LBA Format: LBA Format #07 00:10:34.957 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.957 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.957 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.957 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.957 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.957 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.957 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.957 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.957 00:10:34.957 NVM Specific Namespace Data 00:10:34.957 =========================== 00:10:34.957 Logical Block Storage Tag Mask: 0 00:10:34.957 Protection Information Capabilities: 00:10:34.957 16b Guard Protection Information Storage Tag Support: No 00:10:34.957 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.957 Storage Tag Check Read Support: No 00:10:34.957 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.957 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.957 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.957 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.957 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.957 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.957 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.957 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.957 ===================================================== 00:10:34.957 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:34.957 ===================================================== 00:10:34.957 Controller Capabilities/Features 00:10:34.957 ================================ 00:10:34.957 Vendor ID: 1b36 00:10:34.957 Subsystem Vendor ID: 1af4 00:10:34.957 Serial Number: 12341 00:10:34.957 Model Number: QEMU NVMe Ctrl 00:10:34.957 Firmware Version: 8.0.0 00:10:34.957 Recommended Arb Burst: 6 00:10:34.957 IEEE OUI Identifier: 00 54 52 00:10:34.957 Multi-path I/O 00:10:34.957 May have multiple subsystem ports: No 00:10:34.957 May have multiple controllers: No 
00:10:34.957 Associated with SR-IOV VF: No 00:10:34.957 Max Data Transfer Size: 524288 00:10:34.957 Max Number of Namespaces: 256 00:10:34.957 Max Number of I/O Queues: 64 00:10:34.957 NVMe Specification Version (VS): 1.4 00:10:34.957 NVMe Specification Version (Identify): 1.4 00:10:34.957 Maximum Queue Entries: 2048 00:10:34.957 Contiguous Queues Required: Yes 00:10:34.957 Arbitration Mechanisms Supported 00:10:34.957 Weighted Round Robin: Not Supported 00:10:34.957 Vendor Specific: Not Supported 00:10:34.957 Reset Timeout: 7500 ms 00:10:34.957 Doorbell Stride: 4 bytes 00:10:34.957 NVM Subsystem Reset: Not Supported 00:10:34.957 Command Sets Supported 00:10:34.957 NVM Command Set: Supported 00:10:34.957 Boot Partition: Not Supported 00:10:34.957 Memory Page Size Minimum: 4096 bytes 00:10:34.957 Memory Page Size Maximum: 65536 bytes 00:10:34.957 Persistent Memory Region: Not Supported 00:10:34.957 Optional Asynchronous Events Supported 00:10:34.957 Namespace Attribute Notices: Supported 00:10:34.957 Firmware Activation Notices: Not Supported 00:10:34.957 ANA Change Notices: Not Supported 00:10:34.957 PLE Aggregate Log Change Notices: Not Supported 00:10:34.957 LBA Status Info Alert Notices: Not Supported 00:10:34.957 EGE Aggregate Log Change Notices: Not Supported 00:10:34.957 Normal NVM Subsystem Shutdown event: Not Supported 00:10:34.957 Zone Descriptor Change Notices: Not Supported 00:10:34.957 Discovery Log Change Notices: Not Supported 00:10:34.957 Controller Attributes 00:10:34.957 128-bit Host Identifier: Not Supported 00:10:34.957 Non-Operational Permissive Mode: Not Supported 00:10:34.957 NVM Sets: Not Supported 00:10:34.957 Read Recovery Levels: Not Supported 00:10:34.957 Endurance Groups: Not Supported 00:10:34.957 Predictable Latency Mode: Not Supported 00:10:34.957 Traffic Based Keep ALive: Not Supported 00:10:34.957 Namespace Granularity: Not Supported 00:10:34.957 SQ Associations: Not Supported 00:10:34.957 UUID List: Not Supported 00:10:34.957 Multi-Domain Subsystem: Not Supported 00:10:34.957 Fixed Capacity Management: Not Supported 00:10:34.957 Variable Capacity Management: Not Supported 00:10:34.957 Delete Endurance Group: Not Supported 00:10:34.957 Delete NVM Set: Not Supported 00:10:34.957 Extended LBA Formats Supported: Supported 00:10:34.957 Flexible Data Placement Supported: Not Supported 00:10:34.957 00:10:34.957 Controller Memory Buffer Support 00:10:34.957 ================================ 00:10:34.957 Supported: No 00:10:34.957 00:10:34.957 Persistent Memory Region Support 00:10:34.957 ================================ 00:10:34.957 Supported: No 00:10:34.957 00:10:34.957 Admin Command Set Attributes 00:10:34.957 ============================ 00:10:34.957 Security Send/Receive: Not Supported 00:10:34.957 Format NVM: Supported 00:10:34.957 Firmware Activate/Download: Not Supported 00:10:34.957 Namespace Management: Supported 00:10:34.957 Device Self-Test: Not Supported 00:10:34.957 Directives: Supported 00:10:34.957 NVMe-MI: Not Supported 00:10:34.957 Virtualization Management: Not Supported 00:10:34.957 Doorbell Buffer Config: Supported 00:10:34.957 Get LBA Status Capability: Not Supported 00:10:34.957 Command & Feature Lockdown Capability: Not Supported 00:10:34.957 Abort Command Limit: 4 00:10:34.957 Async Event Request Limit: 4 00:10:34.957 Number of Firmware Slots: N/A 00:10:34.957 Firmware Slot 1 Read-Only: N/A 00:10:34.957 Firmware Activation Without Reset: N/A 00:10:34.957 Multiple Update Detection Support: N/A 00:10:34.957 Firmware Update Granularity: No 
Information Provided 00:10:34.957 Per-Namespace SMART Log: Yes 00:10:34.957 Asymmetric Namespace Access Log Page: Not Supported 00:10:34.957 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:34.957 Command Effects Log Page: Supported 00:10:34.957 Get Log Page Extended Data: Supported 00:10:34.957 Telemetry Log Pages: Not Supported 00:10:34.957 Persistent Event Log Pages: Not Supported 00:10:34.957 Supported Log Pages Log Page: May Support 00:10:34.957 Commands Supported & Effects Log Page: Not Supported 00:10:34.957 Feature Identifiers & Effects Log Page:May Support 00:10:34.958 NVMe-MI Commands & Effects Log Page: May Support 00:10:34.958 Data Area 4 for Telemetry Log: Not Supported 00:10:34.958 Error Log Page Entries Supported: 1 00:10:34.958 Keep Alive: Not Supported 00:10:34.958 00:10:34.958 NVM Command Set Attributes 00:10:34.958 ========================== 00:10:34.958 Submission Queue Entry Size 00:10:34.958 Max: 64 00:10:34.958 Min: 64 00:10:34.958 Completion Queue Entry Size 00:10:34.958 Max: 16 00:10:34.958 Min: 16 00:10:34.958 Number of Namespaces: 256 00:10:34.958 Compare Command: Supported 00:10:34.958 Write Uncorrectable Command: Not Supported 00:10:34.958 Dataset Management Command: Supported 00:10:34.958 Write Zeroes Command: Supported 00:10:34.958 Set Features Save Field: Supported 00:10:34.958 Reservations: Not Supported 00:10:34.958 Timestamp: Supported 00:10:34.958 Copy: Supported 00:10:34.958 Volatile Write Cache: Present 00:10:34.958 Atomic Write Unit (Normal): 1 00:10:34.958 Atomic Write Unit (PFail): 1 00:10:34.958 Atomic Compare & Write Unit: 1 00:10:34.958 Fused Compare & Write: Not Supported 00:10:34.958 Scatter-Gather List 00:10:34.958 SGL Command Set: Supported 00:10:34.958 SGL Keyed: Not Supported 00:10:34.958 SGL Bit Bucket Descriptor: Not Supported 00:10:34.958 SGL Metadata Pointer: Not Supported 00:10:34.958 Oversized SGL: Not Supported 00:10:34.958 SGL Metadata Address: Not Supported 00:10:34.958 SGL Offset: Not Supported 00:10:34.958 Transport SGL Data Block: Not Supported 00:10:34.958 Replay Protected Memory Block: Not Supported 00:10:34.958 00:10:34.958 Firmware Slot Information 00:10:34.958 ========================= 00:10:34.958 Active slot: 1 00:10:34.958 Slot 1 Firmware Revision: 1.0 00:10:34.958 00:10:34.958 00:10:34.958 Commands Supported and Effects 00:10:34.958 ============================== 00:10:34.958 Admin Commands 00:10:34.958 -------------- 00:10:34.958 Delete I/O Submission Queue (00h): Supported 00:10:34.958 Create I/O Submission Queue (01h): Supported 00:10:34.958 Get Log Page (02h): Supported 00:10:34.958 Delete I/O Completion Queue (04h): Supported 00:10:34.958 Create I/O Completion Queue (05h): Supported 00:10:34.958 Identify (06h): Supported 00:10:34.958 Abort (08h): Supported 00:10:34.958 Set Features (09h): Supported 00:10:34.958 Get Features (0Ah): Supported 00:10:34.958 Asynchronous Event Request (0Ch): Supported 00:10:34.958 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:34.958 Directive Send (19h): Supported 00:10:34.958 Directive Receive (1Ah): Supported 00:10:34.958 Virtualization Management (1Ch): Supported 00:10:34.958 Doorbell Buffer Config (7Ch): Supported 00:10:34.958 Format NVM (80h): Supported LBA-Change 00:10:34.958 I/O Commands 00:10:34.958 ------------ 00:10:34.958 Flush (00h): Supported LBA-Change 00:10:34.958 Write (01h): Supported LBA-Change 00:10:34.958 Read (02h): Supported 00:10:34.958 Compare (05h): Supported 00:10:34.958 Write Zeroes (08h): Supported LBA-Change 00:10:34.958 Dataset Management 
(09h): Supported LBA-Change 00:10:34.958 Unknown (0Ch): Supported 00:10:34.958 Unknown (12h): Supported 00:10:34.958 Copy (19h): Supported LBA-Change 00:10:34.958 Unknown (1Dh): Supported LBA-Change 00:10:34.958 00:10:34.958 Error Log 00:10:34.958 ========= 00:10:34.958 00:10:34.958 Arbitration 00:10:34.958 =========== 00:10:34.958 Arbitration Burst: no limit 00:10:34.958 00:10:34.958 Power Management 00:10:34.958 ================ 00:10:34.958 Number of Power States: 1 00:10:34.958 Current Power State: Power State #0 00:10:34.958 Power State #0: 00:10:34.958 Max Power: 25.00 W 00:10:34.958 Non-Operational State: Operational 00:10:34.958 Entry Latency: 16 microseconds 00:10:34.958 Exit Latency: 4 microseconds 00:10:34.958 Relative Read Throughput: 0 00:10:34.958 Relative Read Latency: 0 00:10:34.958 Relative Write Throughput: 0 00:10:34.958 Relative Write Latency: 0 00:10:34.958 Idle Power: Not Reported 00:10:34.958 Active Power: Not Reported 00:10:34.958 Non-Operational Permissive Mode: Not Supported 00:10:34.958 00:10:34.958 Health Information 00:10:34.958 ================== 00:10:34.958 Critical Warnings: 00:10:34.958 Available Spare Space: OK 00:10:34.958 Temperature: [2024-12-10 14:15:59.661513] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 65346 terminated unexpected 00:10:34.958 OK 00:10:34.958 Device Reliability: OK 00:10:34.958 Read Only: No 00:10:34.958 Volatile Memory Backup: OK 00:10:34.958 Current Temperature: 323 Kelvin (50 Celsius) 00:10:34.958 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:34.958 Available Spare: 0% 00:10:34.958 Available Spare Threshold: 0% 00:10:34.958 Life Percentage Used: 0% 00:10:34.958 Data Units Read: 1161 00:10:34.958 Data Units Written: 1028 00:10:34.958 Host Read Commands: 49687 00:10:34.958 Host Write Commands: 48482 00:10:34.958 Controller Busy Time: 0 minutes 00:10:34.958 Power Cycles: 0 00:10:34.958 Power On Hours: 0 hours 00:10:34.958 Unsafe Shutdowns: 0 00:10:34.958 Unrecoverable Media Errors: 0 00:10:34.958 Lifetime Error Log Entries: 0 00:10:34.958 Warning Temperature Time: 0 minutes 00:10:34.958 Critical Temperature Time: 0 minutes 00:10:34.958 00:10:34.958 Number of Queues 00:10:34.958 ================ 00:10:34.958 Number of I/O Submission Queues: 64 00:10:34.958 Number of I/O Completion Queues: 64 00:10:34.958 00:10:34.958 ZNS Specific Controller Data 00:10:34.958 ============================ 00:10:34.958 Zone Append Size Limit: 0 00:10:34.958 00:10:34.958 00:10:34.958 Active Namespaces 00:10:34.958 ================= 00:10:34.958 Namespace ID:1 00:10:34.958 Error Recovery Timeout: Unlimited 00:10:34.958 Command Set Identifier: NVM (00h) 00:10:34.958 Deallocate: Supported 00:10:34.958 Deallocated/Unwritten Error: Supported 00:10:34.958 Deallocated Read Value: All 0x00 00:10:34.958 Deallocate in Write Zeroes: Not Supported 00:10:34.958 Deallocated Guard Field: 0xFFFF 00:10:34.958 Flush: Supported 00:10:34.958 Reservation: Not Supported 00:10:34.958 Namespace Sharing Capabilities: Private 00:10:34.958 Size (in LBAs): 1310720 (5GiB) 00:10:34.958 Capacity (in LBAs): 1310720 (5GiB) 00:10:34.958 Utilization (in LBAs): 1310720 (5GiB) 00:10:34.958 Thin Provisioning: Not Supported 00:10:34.958 Per-NS Atomic Units: No 00:10:34.958 Maximum Single Source Range Length: 128 00:10:34.958 Maximum Copy Length: 128 00:10:34.958 Maximum Source Range Count: 128 00:10:34.958 NGUID/EUI64 Never Reused: No 00:10:34.958 Namespace Write Protected: No 00:10:34.958 Number of LBA Formats: 8 00:10:34.958 Current LBA 
Format: LBA Format #04 00:10:34.958 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.958 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.958 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.958 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.958 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.958 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.958 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.958 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.958 00:10:34.958 NVM Specific Namespace Data 00:10:34.958 =========================== 00:10:34.958 Logical Block Storage Tag Mask: 0 00:10:34.958 Protection Information Capabilities: 00:10:34.958 16b Guard Protection Information Storage Tag Support: No 00:10:34.958 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.958 Storage Tag Check Read Support: No 00:10:34.958 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.958 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.958 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.958 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.958 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.958 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.958 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.958 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.958 ===================================================== 00:10:34.958 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:34.958 ===================================================== 00:10:34.958 Controller Capabilities/Features 00:10:34.958 ================================ 00:10:34.958 Vendor ID: 1b36 00:10:34.958 Subsystem Vendor ID: 1af4 00:10:34.958 Serial Number: 12343 00:10:34.958 Model Number: QEMU NVMe Ctrl 00:10:34.958 Firmware Version: 8.0.0 00:10:34.958 Recommended Arb Burst: 6 00:10:34.958 IEEE OUI Identifier: 00 54 52 00:10:34.958 Multi-path I/O 00:10:34.958 May have multiple subsystem ports: No 00:10:34.958 May have multiple controllers: Yes 00:10:34.958 Associated with SR-IOV VF: No 00:10:34.958 Max Data Transfer Size: 524288 00:10:34.958 Max Number of Namespaces: 256 00:10:34.958 Max Number of I/O Queues: 64 00:10:34.959 NVMe Specification Version (VS): 1.4 00:10:34.959 NVMe Specification Version (Identify): 1.4 00:10:34.959 Maximum Queue Entries: 2048 00:10:34.959 Contiguous Queues Required: Yes 00:10:34.959 Arbitration Mechanisms Supported 00:10:34.959 Weighted Round Robin: Not Supported 00:10:34.959 Vendor Specific: Not Supported 00:10:34.959 Reset Timeout: 7500 ms 00:10:34.959 Doorbell Stride: 4 bytes 00:10:34.959 NVM Subsystem Reset: Not Supported 00:10:34.959 Command Sets Supported 00:10:34.959 NVM Command Set: Supported 00:10:34.959 Boot Partition: Not Supported 00:10:34.959 Memory Page Size Minimum: 4096 bytes 00:10:34.959 Memory Page Size Maximum: 65536 bytes 00:10:34.959 Persistent Memory Region: Not Supported 00:10:34.959 Optional Asynchronous Events Supported 00:10:34.959 Namespace Attribute Notices: Supported 00:10:34.959 Firmware Activation Notices: Not Supported 00:10:34.959 ANA Change Notices: Not Supported 00:10:34.959 PLE Aggregate 
Log Change Notices: Not Supported 00:10:34.959 LBA Status Info Alert Notices: Not Supported 00:10:34.959 EGE Aggregate Log Change Notices: Not Supported 00:10:34.959 Normal NVM Subsystem Shutdown event: Not Supported 00:10:34.959 Zone Descriptor Change Notices: Not Supported 00:10:34.959 Discovery Log Change Notices: Not Supported 00:10:34.959 Controller Attributes 00:10:34.959 128-bit Host Identifier: Not Supported 00:10:34.959 Non-Operational Permissive Mode: Not Supported 00:10:34.959 NVM Sets: Not Supported 00:10:34.959 Read Recovery Levels: Not Supported 00:10:34.959 Endurance Groups: Supported 00:10:34.959 Predictable Latency Mode: Not Supported 00:10:34.959 Traffic Based Keep ALive: Not Supported 00:10:34.959 Namespace Granularity: Not Supported 00:10:34.959 SQ Associations: Not Supported 00:10:34.959 UUID List: Not Supported 00:10:34.959 Multi-Domain Subsystem: Not Supported 00:10:34.959 Fixed Capacity Management: Not Supported 00:10:34.959 Variable Capacity Management: Not Supported 00:10:34.959 Delete Endurance Group: Not Supported 00:10:34.959 Delete NVM Set: Not Supported 00:10:34.959 Extended LBA Formats Supported: Supported 00:10:34.959 Flexible Data Placement Supported: Supported 00:10:34.959 00:10:34.959 Controller Memory Buffer Support 00:10:34.959 ================================ 00:10:34.959 Supported: No 00:10:34.959 00:10:34.959 Persistent Memory Region Support 00:10:34.959 ================================ 00:10:34.959 Supported: No 00:10:34.959 00:10:34.959 Admin Command Set Attributes 00:10:34.959 ============================ 00:10:34.959 Security Send/Receive: Not Supported 00:10:34.959 Format NVM: Supported 00:10:34.959 Firmware Activate/Download: Not Supported 00:10:34.959 Namespace Management: Supported 00:10:34.959 Device Self-Test: Not Supported 00:10:34.959 Directives: Supported 00:10:34.959 NVMe-MI: Not Supported 00:10:34.959 Virtualization Management: Not Supported 00:10:34.959 Doorbell Buffer Config: Supported 00:10:34.959 Get LBA Status Capability: Not Supported 00:10:34.959 Command & Feature Lockdown Capability: Not Supported 00:10:34.959 Abort Command Limit: 4 00:10:34.959 Async Event Request Limit: 4 00:10:34.959 Number of Firmware Slots: N/A 00:10:34.959 Firmware Slot 1 Read-Only: N/A 00:10:34.959 Firmware Activation Without Reset: N/A 00:10:34.959 Multiple Update Detection Support: N/A 00:10:34.959 Firmware Update Granularity: No Information Provided 00:10:34.959 Per-Namespace SMART Log: Yes 00:10:34.959 Asymmetric Namespace Access Log Page: Not Supported 00:10:34.959 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:34.959 Command Effects Log Page: Supported 00:10:34.959 Get Log Page Extended Data: Supported 00:10:34.959 Telemetry Log Pages: Not Supported 00:10:34.959 Persistent Event Log Pages: Not Supported 00:10:34.959 Supported Log Pages Log Page: May Support 00:10:34.959 Commands Supported & Effects Log Page: Not Supported 00:10:34.959 Feature Identifiers & Effects Log Page:May Support 00:10:34.959 NVMe-MI Commands & Effects Log Page: May Support 00:10:34.959 Data Area 4 for Telemetry Log: Not Supported 00:10:34.959 Error Log Page Entries Supported: 1 00:10:34.959 Keep Alive: Not Supported 00:10:34.959 00:10:34.959 NVM Command Set Attributes 00:10:34.959 ========================== 00:10:34.959 Submission Queue Entry Size 00:10:34.959 Max: 64 00:10:34.959 Min: 64 00:10:34.959 Completion Queue Entry Size 00:10:34.959 Max: 16 00:10:34.959 Min: 16 00:10:34.959 Number of Namespaces: 256 00:10:34.959 Compare Command: Supported 00:10:34.959 Write 
Uncorrectable Command: Not Supported 00:10:34.959 Dataset Management Command: Supported 00:10:34.959 Write Zeroes Command: Supported 00:10:34.959 Set Features Save Field: Supported 00:10:34.959 Reservations: Not Supported 00:10:34.959 Timestamp: Supported 00:10:34.959 Copy: Supported 00:10:34.959 Volatile Write Cache: Present 00:10:34.959 Atomic Write Unit (Normal): 1 00:10:34.959 Atomic Write Unit (PFail): 1 00:10:34.959 Atomic Compare & Write Unit: 1 00:10:34.959 Fused Compare & Write: Not Supported 00:10:34.959 Scatter-Gather List 00:10:34.959 SGL Command Set: Supported 00:10:34.959 SGL Keyed: Not Supported 00:10:34.959 SGL Bit Bucket Descriptor: Not Supported 00:10:34.959 SGL Metadata Pointer: Not Supported 00:10:34.959 Oversized SGL: Not Supported 00:10:34.959 SGL Metadata Address: Not Supported 00:10:34.959 SGL Offset: Not Supported 00:10:34.959 Transport SGL Data Block: Not Supported 00:10:34.959 Replay Protected Memory Block: Not Supported 00:10:34.959 00:10:34.959 Firmware Slot Information 00:10:34.959 ========================= 00:10:34.959 Active slot: 1 00:10:34.959 Slot 1 Firmware Revision: 1.0 00:10:34.959 00:10:34.959 00:10:34.959 Commands Supported and Effects 00:10:34.959 ============================== 00:10:34.959 Admin Commands 00:10:34.959 -------------- 00:10:34.959 Delete I/O Submission Queue (00h): Supported 00:10:34.959 Create I/O Submission Queue (01h): Supported 00:10:34.959 Get Log Page (02h): Supported 00:10:34.959 Delete I/O Completion Queue (04h): Supported 00:10:34.959 Create I/O Completion Queue (05h): Supported 00:10:34.959 Identify (06h): Supported 00:10:34.959 Abort (08h): Supported 00:10:34.959 Set Features (09h): Supported 00:10:34.959 Get Features (0Ah): Supported 00:10:34.959 Asynchronous Event Request (0Ch): Supported 00:10:34.959 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:34.959 Directive Send (19h): Supported 00:10:34.959 Directive Receive (1Ah): Supported 00:10:34.959 Virtualization Management (1Ch): Supported 00:10:34.959 Doorbell Buffer Config (7Ch): Supported 00:10:34.959 Format NVM (80h): Supported LBA-Change 00:10:34.959 I/O Commands 00:10:34.959 ------------ 00:10:34.959 Flush (00h): Supported LBA-Change 00:10:34.959 Write (01h): Supported LBA-Change 00:10:34.959 Read (02h): Supported 00:10:34.959 Compare (05h): Supported 00:10:34.959 Write Zeroes (08h): Supported LBA-Change 00:10:34.959 Dataset Management (09h): Supported LBA-Change 00:10:34.959 Unknown (0Ch): Supported 00:10:34.959 Unknown (12h): Supported 00:10:34.959 Copy (19h): Supported LBA-Change 00:10:34.959 Unknown (1Dh): Supported LBA-Change 00:10:34.959 00:10:34.959 Error Log 00:10:34.959 ========= 00:10:34.959 00:10:34.959 Arbitration 00:10:34.959 =========== 00:10:34.959 Arbitration Burst: no limit 00:10:34.959 00:10:34.959 Power Management 00:10:34.959 ================ 00:10:34.959 Number of Power States: 1 00:10:34.959 Current Power State: Power State #0 00:10:34.959 Power State #0: 00:10:34.959 Max Power: 25.00 W 00:10:34.959 Non-Operational State: Operational 00:10:34.959 Entry Latency: 16 microseconds 00:10:34.959 Exit Latency: 4 microseconds 00:10:34.959 Relative Read Throughput: 0 00:10:34.959 Relative Read Latency: 0 00:10:34.959 Relative Write Throughput: 0 00:10:34.959 Relative Write Latency: 0 00:10:34.959 Idle Power: Not Reported 00:10:34.959 Active Power: Not Reported 00:10:34.959 Non-Operational Permissive Mode: Not Supported 00:10:34.959 00:10:34.959 Health Information 00:10:34.959 ================== 00:10:34.959 Critical Warnings: 00:10:34.959 
Available Spare Space: OK 00:10:34.959 Temperature: OK 00:10:34.959 Device Reliability: OK 00:10:34.959 Read Only: No 00:10:34.959 Volatile Memory Backup: OK 00:10:34.959 Current Temperature: 323 Kelvin (50 Celsius) 00:10:34.959 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:34.959 Available Spare: 0% 00:10:34.959 Available Spare Threshold: 0% 00:10:34.959 Life Percentage Used: 0% 00:10:34.959 Data Units Read: 1095 00:10:34.959 Data Units Written: 1024 00:10:34.959 Host Read Commands: 35932 00:10:34.959 Host Write Commands: 35355 00:10:34.959 Controller Busy Time: 0 minutes 00:10:34.959 Power Cycles: 0 00:10:34.959 Power On Hours: 0 hours 00:10:34.959 Unsafe Shutdowns: 0 00:10:34.959 Unrecoverable Media Errors: 0 00:10:34.959 Lifetime Error Log Entries: 0 00:10:34.959 Warning Temperature Time: 0 minutes 00:10:34.959 Critical Temperature Time: 0 minutes 00:10:34.959 00:10:34.959 Number of Queues 00:10:34.959 ================ 00:10:34.959 Number of I/O Submission Queues: 64 00:10:34.960 Number of I/O Completion Queues: 64 00:10:34.960 00:10:34.960 ZNS Specific Controller Data 00:10:34.960 ============================ 00:10:34.960 Zone Append Size Limit: 0 00:10:34.960 00:10:34.960 00:10:34.960 Active Namespaces 00:10:34.960 ================= 00:10:34.960 Namespace ID:1 00:10:34.960 Error Recovery Timeout: Unlimited 00:10:34.960 Command Set Identifier: NVM (00h) 00:10:34.960 Deallocate: Supported 00:10:34.960 Deallocated/Unwritten Error: Supported 00:10:34.960 Deallocated Read Value: All 0x00 00:10:34.960 Deallocate in Write Zeroes: Not Supported 00:10:34.960 Deallocated Guard Field: 0xFFFF 00:10:34.960 Flush: Supported 00:10:34.960 Reservation: Not Supported 00:10:34.960 Namespace Sharing Capabilities: Multiple Controllers 00:10:34.960 Size (in LBAs): 262144 (1GiB) 00:10:34.960 Capacity (in LBAs): 262144 (1GiB) 00:10:34.960 Utilization (in LBAs): 262144 (1GiB) 00:10:34.960 Thin Provisioning: Not Supported 00:10:34.960 Per-NS Atomic Units: No 00:10:34.960 Maximum Single Source Range Length: 128 00:10:34.960 Maximum Copy Length: 128 00:10:34.960 Maximum Source Range Count: 128 00:10:34.960 NGUID/EUI64 Never Reused: No 00:10:34.960 Namespace Write Protected: No 00:10:34.960 Endurance group ID: 1 00:10:34.960 Number of LBA Formats: 8 00:10:34.960 Current LBA Format: LBA Format #04 00:10:34.960 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.960 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.960 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.960 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.960 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.960 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.960 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.960 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.960 00:10:34.960 Get Feature FDP: 00:10:34.960 ================ 00:10:34.960 Enabled: Yes 00:10:34.960 FDP configuration index: 0 00:10:34.960 00:10:34.960 FDP configurations log page 00:10:34.960 =========================== 00:10:34.960 Number of FDP configurations: 1 00:10:34.960 Version: 0 00:10:34.960 Size: 112 00:10:34.960 FDP Configuration Descriptor: 0 00:10:34.960 Descriptor Size: 96 00:10:34.960 Reclaim Group Identifier format: 2 00:10:34.960 FDP Volatile Write Cache: Not Present 00:10:34.960 FDP Configuration: Valid 00:10:34.960 Vendor Specific Size: 0 00:10:34.960 Number of Reclaim Groups: 2 00:10:34.960 Number of Recalim Unit Handles: 8 00:10:34.960 Max Placement Identifiers: 128 00:10:34.960 Number of 
Namespaces Suppprted: 256 00:10:34.960 Reclaim unit Nominal Size: 6000000 bytes 00:10:34.960 Estimated Reclaim Unit Time Limit: Not Reported 00:10:34.960 RUH Desc #000: RUH Type: Initially Isolated 00:10:34.960 RUH Desc #001: RUH Type: Initially Isolated 00:10:34.960 RUH Desc #002: RUH Type: Initially Isolated 00:10:34.960 RUH Desc #003: RUH Type: Initially Isolated 00:10:34.960 RUH Desc #004: RUH Type: Initially Isolated 00:10:34.960 RUH Desc #005: RUH Type: Initially Isolated 00:10:34.960 RUH Desc #006: RUH Type: Initially Isolated 00:10:34.960 RUH Desc #007: RUH Type: Initially Isolated 00:10:34.960 00:10:34.960 FDP reclaim unit handle usage log page 00:10:34.960 ====================================== 00:10:34.960 Number of Reclaim Unit Handles: 8 00:10:34.960 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:34.960 RUH Usage Desc #001: RUH Attributes: Unused 00:10:34.960 RUH Usage Desc #002: RUH Attributes: Unused 00:10:34.960 RUH Usage Desc #003: RUH Attributes: Unused 00:10:34.960 RUH Usage Desc #004: RUH Attributes: Unused 00:10:34.960 RUH Usage Desc #005: RUH Attributes: Unused 00:10:34.960 RUH Usage Desc #006: RUH Attributes: Unused 00:10:34.960 RUH Usage Desc #007: RUH Attributes: Unused 00:10:34.960 00:10:34.960 FDP statistics log page 00:10:34.960 ======================= 00:10:34.960 Host bytes with metadata written: 631021568 00:10:34.960 M[2024-12-10 14:15:59.663391] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 65346 terminated unexpected 00:10:34.960 edia bytes with metadata written: 631103488 00:10:34.960 Media bytes erased: 0 00:10:34.960 00:10:34.960 FDP events log page 00:10:34.960 =================== 00:10:34.960 Number of FDP events: 0 00:10:34.960 00:10:34.960 NVM Specific Namespace Data 00:10:34.960 =========================== 00:10:34.960 Logical Block Storage Tag Mask: 0 00:10:34.960 Protection Information Capabilities: 00:10:34.960 16b Guard Protection Information Storage Tag Support: No 00:10:34.960 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.960 Storage Tag Check Read Support: No 00:10:34.960 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.960 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.960 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.960 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.960 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.960 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.960 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.960 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.960 ===================================================== 00:10:34.960 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:34.960 ===================================================== 00:10:34.960 Controller Capabilities/Features 00:10:34.960 ================================ 00:10:34.960 Vendor ID: 1b36 00:10:34.960 Subsystem Vendor ID: 1af4 00:10:34.960 Serial Number: 12342 00:10:34.960 Model Number: QEMU NVMe Ctrl 00:10:34.960 Firmware Version: 8.0.0 00:10:34.960 Recommended Arb Burst: 6 00:10:34.960 IEEE OUI Identifier: 00 54 52 00:10:34.960 Multi-path I/O 
00:10:34.960 May have multiple subsystem ports: No 00:10:34.960 May have multiple controllers: No 00:10:34.960 Associated with SR-IOV VF: No 00:10:34.960 Max Data Transfer Size: 524288 00:10:34.960 Max Number of Namespaces: 256 00:10:34.960 Max Number of I/O Queues: 64 00:10:34.960 NVMe Specification Version (VS): 1.4 00:10:34.960 NVMe Specification Version (Identify): 1.4 00:10:34.960 Maximum Queue Entries: 2048 00:10:34.960 Contiguous Queues Required: Yes 00:10:34.960 Arbitration Mechanisms Supported 00:10:34.960 Weighted Round Robin: Not Supported 00:10:34.960 Vendor Specific: Not Supported 00:10:34.960 Reset Timeout: 7500 ms 00:10:34.960 Doorbell Stride: 4 bytes 00:10:34.960 NVM Subsystem Reset: Not Supported 00:10:34.960 Command Sets Supported 00:10:34.960 NVM Command Set: Supported 00:10:34.960 Boot Partition: Not Supported 00:10:34.960 Memory Page Size Minimum: 4096 bytes 00:10:34.960 Memory Page Size Maximum: 65536 bytes 00:10:34.960 Persistent Memory Region: Not Supported 00:10:34.960 Optional Asynchronous Events Supported 00:10:34.960 Namespace Attribute Notices: Supported 00:10:34.960 Firmware Activation Notices: Not Supported 00:10:34.960 ANA Change Notices: Not Supported 00:10:34.960 PLE Aggregate Log Change Notices: Not Supported 00:10:34.960 LBA Status Info Alert Notices: Not Supported 00:10:34.960 EGE Aggregate Log Change Notices: Not Supported 00:10:34.960 Normal NVM Subsystem Shutdown event: Not Supported 00:10:34.960 Zone Descriptor Change Notices: Not Supported 00:10:34.960 Discovery Log Change Notices: Not Supported 00:10:34.960 Controller Attributes 00:10:34.960 128-bit Host Identifier: Not Supported 00:10:34.960 Non-Operational Permissive Mode: Not Supported 00:10:34.960 NVM Sets: Not Supported 00:10:34.960 Read Recovery Levels: Not Supported 00:10:34.960 Endurance Groups: Not Supported 00:10:34.960 Predictable Latency Mode: Not Supported 00:10:34.960 Traffic Based Keep Alive: Not Supported 00:10:34.960 Namespace Granularity: Not Supported 00:10:34.960 SQ Associations: Not Supported 00:10:34.961 UUID List: Not Supported 00:10:34.961 Multi-Domain Subsystem: Not Supported 00:10:34.961 Fixed Capacity Management: Not Supported 00:10:34.961 Variable Capacity Management: Not Supported 00:10:34.961 Delete Endurance Group: Not Supported 00:10:34.961 Delete NVM Set: Not Supported 00:10:34.961 Extended LBA Formats Supported: Supported 00:10:34.961 Flexible Data Placement Supported: Not Supported 00:10:34.961 00:10:34.961 Controller Memory Buffer Support 00:10:34.961 ================================ 00:10:34.961 Supported: No 00:10:34.961 00:10:34.961 Persistent Memory Region Support 00:10:34.961 ================================ 00:10:34.961 Supported: No 00:10:34.961 00:10:34.961 Admin Command Set Attributes 00:10:34.961 ============================ 00:10:34.961 Security Send/Receive: Not Supported 00:10:34.961 Format NVM: Supported 00:10:34.961 Firmware Activate/Download: Not Supported 00:10:34.961 Namespace Management: Supported 00:10:34.961 Device Self-Test: Not Supported 00:10:34.961 Directives: Supported 00:10:34.961 NVMe-MI: Not Supported 00:10:34.961 Virtualization Management: Not Supported 00:10:34.961 Doorbell Buffer Config: Supported 00:10:34.961 Get LBA Status Capability: Not Supported 00:10:34.961 Command & Feature Lockdown Capability: Not Supported 00:10:34.961 Abort Command Limit: 4 00:10:34.961 Async Event Request Limit: 4 00:10:34.961 Number of Firmware Slots: N/A 00:10:34.961 Firmware Slot 1 Read-Only: N/A 00:10:34.961 Firmware Activation Without Reset: N/A
00:10:34.961 Multiple Update Detection Support: N/A 00:10:34.961 Firmware Update Granularity: No Information Provided 00:10:34.961 Per-Namespace SMART Log: Yes 00:10:34.961 Asymmetric Namespace Access Log Page: Not Supported 00:10:34.961 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:34.961 Command Effects Log Page: Supported 00:10:34.961 Get Log Page Extended Data: Supported 00:10:34.961 Telemetry Log Pages: Not Supported 00:10:34.961 Persistent Event Log Pages: Not Supported 00:10:34.961 Supported Log Pages Log Page: May Support 00:10:34.961 Commands Supported & Effects Log Page: Not Supported 00:10:34.961 Feature Identifiers & Effects Log Page: May Support 00:10:34.961 NVMe-MI Commands & Effects Log Page: May Support 00:10:34.961 Data Area 4 for Telemetry Log: Not Supported 00:10:34.961 Error Log Page Entries Supported: 1 00:10:34.961 Keep Alive: Not Supported 00:10:34.961 00:10:34.961 NVM Command Set Attributes 00:10:34.961 ========================== 00:10:34.961 Submission Queue Entry Size 00:10:34.961 Max: 64 00:10:34.961 Min: 64 00:10:34.961 Completion Queue Entry Size 00:10:34.961 Max: 16 00:10:34.961 Min: 16 00:10:34.961 Number of Namespaces: 256 00:10:34.961 Compare Command: Supported 00:10:34.961 Write Uncorrectable Command: Not Supported 00:10:34.961 Dataset Management Command: Supported 00:10:34.961 Write Zeroes Command: Supported 00:10:34.961 Set Features Save Field: Supported 00:10:34.961 Reservations: Not Supported 00:10:34.961 Timestamp: Supported 00:10:34.961 Copy: Supported 00:10:34.961 Volatile Write Cache: Present 00:10:34.961 Atomic Write Unit (Normal): 1 00:10:34.961 Atomic Write Unit (PFail): 1 00:10:34.961 Atomic Compare & Write Unit: 1 00:10:34.961 Fused Compare & Write: Not Supported 00:10:34.961 Scatter-Gather List 00:10:34.961 SGL Command Set: Supported 00:10:34.961 SGL Keyed: Not Supported 00:10:34.961 SGL Bit Bucket Descriptor: Not Supported 00:10:34.961 SGL Metadata Pointer: Not Supported 00:10:34.961 Oversized SGL: Not Supported 00:10:34.961 SGL Metadata Address: Not Supported 00:10:34.961 SGL Offset: Not Supported 00:10:34.961 Transport SGL Data Block: Not Supported 00:10:34.961 Replay Protected Memory Block: Not Supported 00:10:34.961 00:10:34.961 Firmware Slot Information 00:10:34.961 ========================= 00:10:34.961 Active slot: 1 00:10:34.961 Slot 1 Firmware Revision: 1.0 00:10:34.961 00:10:34.961 00:10:34.961 Commands Supported and Effects 00:10:34.961 ============================== 00:10:34.961 Admin Commands 00:10:34.961 -------------- 00:10:34.961 Delete I/O Submission Queue (00h): Supported 00:10:34.961 Create I/O Submission Queue (01h): Supported 00:10:34.961 Get Log Page (02h): Supported 00:10:34.961 Delete I/O Completion Queue (04h): Supported 00:10:34.961 Create I/O Completion Queue (05h): Supported 00:10:34.961 Identify (06h): Supported 00:10:34.961 Abort (08h): Supported 00:10:34.961 Set Features (09h): Supported 00:10:34.961 Get Features (0Ah): Supported 00:10:34.961 Asynchronous Event Request (0Ch): Supported 00:10:34.961 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:34.961 Directive Send (19h): Supported 00:10:34.961 Directive Receive (1Ah): Supported 00:10:34.961 Virtualization Management (1Ch): Supported 00:10:34.961 Doorbell Buffer Config (7Ch): Supported 00:10:34.961 Format NVM (80h): Supported LBA-Change 00:10:34.961 I/O Commands 00:10:34.961 ------------ 00:10:34.961 Flush (00h): Supported LBA-Change 00:10:34.961 Write (01h): Supported LBA-Change 00:10:34.961 Read (02h): Supported 00:10:34.961 Compare (05h):
Supported 00:10:34.961 Write Zeroes (08h): Supported LBA-Change 00:10:34.961 Dataset Management (09h): Supported LBA-Change 00:10:34.961 Unknown (0Ch): Supported 00:10:34.961 Unknown (12h): Supported 00:10:34.961 Copy (19h): Supported LBA-Change 00:10:34.961 Unknown (1Dh): Supported LBA-Change 00:10:34.961 00:10:34.961 Error Log 00:10:34.961 ========= 00:10:34.961 00:10:34.961 Arbitration 00:10:34.961 =========== 00:10:34.961 Arbitration Burst: no limit 00:10:34.961 00:10:34.961 Power Management 00:10:34.961 ================ 00:10:34.961 Number of Power States: 1 00:10:34.961 Current Power State: Power State #0 00:10:34.961 Power State #0: 00:10:34.961 Max Power: 25.00 W 00:10:34.961 Non-Operational State: Operational 00:10:34.961 Entry Latency: 16 microseconds 00:10:34.961 Exit Latency: 4 microseconds 00:10:34.961 Relative Read Throughput: 0 00:10:34.961 Relative Read Latency: 0 00:10:34.961 Relative Write Throughput: 0 00:10:34.961 Relative Write Latency: 0 00:10:34.961 Idle Power: Not Reported 00:10:34.961 Active Power: Not Reported 00:10:34.961 Non-Operational Permissive Mode: Not Supported 00:10:34.961 00:10:34.961 Health Information 00:10:34.961 ================== 00:10:34.961 Critical Warnings: 00:10:34.961 Available Spare Space: OK 00:10:34.961 Temperature: OK 00:10:34.961 Device Reliability: OK 00:10:34.961 Read Only: No 00:10:34.961 Volatile Memory Backup: OK 00:10:34.961 Current Temperature: 323 Kelvin (50 Celsius) 00:10:34.961 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:34.961 Available Spare: 0% 00:10:34.961 Available Spare Threshold: 0% 00:10:34.961 Life Percentage Used: 0% 00:10:34.961 Data Units Read: 2570 00:10:34.961 Data Units Written: 2357 00:10:34.961 Host Read Commands: 102158 00:10:34.961 Host Write Commands: 100427 00:10:34.961 Controller Busy Time: 0 minutes 00:10:34.961 Power Cycles: 0 00:10:34.961 Power On Hours: 0 hours 00:10:34.961 Unsafe Shutdowns: 0 00:10:34.961 Unrecoverable Media Errors: 0 00:10:34.961 Lifetime Error Log Entries: 0 00:10:34.961 Warning Temperature Time: 0 minutes 00:10:34.961 Critical Temperature Time: 0 minutes 00:10:34.961 00:10:34.961 Number of Queues 00:10:34.961 ================ 00:10:34.961 Number of I/O Submission Queues: 64 00:10:34.961 Number of I/O Completion Queues: 64 00:10:34.961 00:10:34.961 ZNS Specific Controller Data 00:10:34.961 ============================ 00:10:34.961 Zone Append Size Limit: 0 00:10:34.961 00:10:34.961 00:10:34.961 Active Namespaces 00:10:34.961 ================= 00:10:34.961 Namespace ID:1 00:10:34.961 Error Recovery Timeout: Unlimited 00:10:34.961 Command Set Identifier: NVM (00h) 00:10:34.961 Deallocate: Supported 00:10:34.961 Deallocated/Unwritten Error: Supported 00:10:34.961 Deallocated Read Value: All 0x00 00:10:34.961 Deallocate in Write Zeroes: Not Supported 00:10:34.961 Deallocated Guard Field: 0xFFFF 00:10:34.961 Flush: Supported 00:10:34.961 Reservation: Not Supported 00:10:34.961 Namespace Sharing Capabilities: Private 00:10:34.961 Size (in LBAs): 1048576 (4GiB) 00:10:34.961 Capacity (in LBAs): 1048576 (4GiB) 00:10:34.961 Utilization (in LBAs): 1048576 (4GiB) 00:10:34.961 Thin Provisioning: Not Supported 00:10:34.961 Per-NS Atomic Units: No 00:10:34.961 Maximum Single Source Range Length: 128 00:10:34.961 Maximum Copy Length: 128 00:10:34.961 Maximum Source Range Count: 128 00:10:34.961 NGUID/EUI64 Never Reused: No 00:10:34.961 Namespace Write Protected: No 00:10:34.961 Number of LBA Formats: 8 00:10:34.961 Current LBA Format: LBA Format #04 00:10:34.961 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:10:34.961 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.961 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.961 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.961 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.961 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.962 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.962 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.962 00:10:34.962 NVM Specific Namespace Data 00:10:34.962 =========================== 00:10:34.962 Logical Block Storage Tag Mask: 0 00:10:34.962 Protection Information Capabilities: 00:10:34.962 16b Guard Protection Information Storage Tag Support: No 00:10:34.962 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.962 Storage Tag Check Read Support: No 00:10:34.962 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Namespace ID:2 00:10:34.962 Error Recovery Timeout: Unlimited 00:10:34.962 Command Set Identifier: NVM (00h) 00:10:34.962 Deallocate: Supported 00:10:34.962 Deallocated/Unwritten Error: Supported 00:10:34.962 Deallocated Read Value: All 0x00 00:10:34.962 Deallocate in Write Zeroes: Not Supported 00:10:34.962 Deallocated Guard Field: 0xFFFF 00:10:34.962 Flush: Supported 00:10:34.962 Reservation: Not Supported 00:10:34.962 Namespace Sharing Capabilities: Private 00:10:34.962 Size (in LBAs): 1048576 (4GiB) 00:10:34.962 Capacity (in LBAs): 1048576 (4GiB) 00:10:34.962 Utilization (in LBAs): 1048576 (4GiB) 00:10:34.962 Thin Provisioning: Not Supported 00:10:34.962 Per-NS Atomic Units: No 00:10:34.962 Maximum Single Source Range Length: 128 00:10:34.962 Maximum Copy Length: 128 00:10:34.962 Maximum Source Range Count: 128 00:10:34.962 NGUID/EUI64 Never Reused: No 00:10:34.962 Namespace Write Protected: No 00:10:34.962 Number of LBA Formats: 8 00:10:34.962 Current LBA Format: LBA Format #04 00:10:34.962 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.962 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.962 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.962 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.962 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.962 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.962 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.962 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.962 00:10:34.962 NVM Specific Namespace Data 00:10:34.962 =========================== 00:10:34.962 Logical Block Storage Tag Mask: 0 00:10:34.962 Protection Information Capabilities: 00:10:34.962 16b Guard Protection Information Storage Tag Support: No 00:10:34.962 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:10:34.962 Storage Tag Check Read Support: No 00:10:34.962 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Namespace ID:3 00:10:34.962 Error Recovery Timeout: Unlimited 00:10:34.962 Command Set Identifier: NVM (00h) 00:10:34.962 Deallocate: Supported 00:10:34.962 Deallocated/Unwritten Error: Supported 00:10:34.962 Deallocated Read Value: All 0x00 00:10:34.962 Deallocate in Write Zeroes: Not Supported 00:10:34.962 Deallocated Guard Field: 0xFFFF 00:10:34.962 Flush: Supported 00:10:34.962 Reservation: Not Supported 00:10:34.962 Namespace Sharing Capabilities: Private 00:10:34.962 Size (in LBAs): 1048576 (4GiB) 00:10:34.962 Capacity (in LBAs): 1048576 (4GiB) 00:10:34.962 Utilization (in LBAs): 1048576 (4GiB) 00:10:34.962 Thin Provisioning: Not Supported 00:10:34.962 Per-NS Atomic Units: No 00:10:34.962 Maximum Single Source Range Length: 128 00:10:34.962 Maximum Copy Length: 128 00:10:34.962 Maximum Source Range Count: 128 00:10:34.962 NGUID/EUI64 Never Reused: No 00:10:34.962 Namespace Write Protected: No 00:10:34.962 Number of LBA Formats: 8 00:10:34.962 Current LBA Format: LBA Format #04 00:10:34.962 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.962 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.962 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.962 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.962 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.962 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.962 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.962 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.962 00:10:34.962 NVM Specific Namespace Data 00:10:34.962 =========================== 00:10:34.962 Logical Block Storage Tag Mask: 0 00:10:34.962 Protection Information Capabilities: 00:10:34.962 16b Guard Protection Information Storage Tag Support: No 00:10:34.962 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.962 Storage Tag Check Read Support: No 00:10:34.962 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.962 14:15:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:34.962 14:15:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:35.222 ===================================================== 00:10:35.222 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:35.222 ===================================================== 00:10:35.222 Controller Capabilities/Features 00:10:35.222 ================================ 00:10:35.222 Vendor ID: 1b36 00:10:35.222 Subsystem Vendor ID: 1af4 00:10:35.222 Serial Number: 12340 00:10:35.222 Model Number: QEMU NVMe Ctrl 00:10:35.222 Firmware Version: 8.0.0 00:10:35.222 Recommended Arb Burst: 6 00:10:35.222 IEEE OUI Identifier: 00 54 52 00:10:35.222 Multi-path I/O 00:10:35.222 May have multiple subsystem ports: No 00:10:35.222 May have multiple controllers: No 00:10:35.222 Associated with SR-IOV VF: No 00:10:35.222 Max Data Transfer Size: 524288 00:10:35.222 Max Number of Namespaces: 256 00:10:35.222 Max Number of I/O Queues: 64 00:10:35.222 NVMe Specification Version (VS): 1.4 00:10:35.222 NVMe Specification Version (Identify): 1.4 00:10:35.222 Maximum Queue Entries: 2048 00:10:35.222 Contiguous Queues Required: Yes 00:10:35.222 Arbitration Mechanisms Supported 00:10:35.222 Weighted Round Robin: Not Supported 00:10:35.222 Vendor Specific: Not Supported 00:10:35.222 Reset Timeout: 7500 ms 00:10:35.222 Doorbell Stride: 4 bytes 00:10:35.222 NVM Subsystem Reset: Not Supported 00:10:35.222 Command Sets Supported 00:10:35.222 NVM Command Set: Supported 00:10:35.222 Boot Partition: Not Supported 00:10:35.222 Memory Page Size Minimum: 4096 bytes 00:10:35.222 Memory Page Size Maximum: 65536 bytes 00:10:35.222 Persistent Memory Region: Not Supported 00:10:35.222 Optional Asynchronous Events Supported 00:10:35.222 Namespace Attribute Notices: Supported 00:10:35.222 Firmware Activation Notices: Not Supported 00:10:35.222 ANA Change Notices: Not Supported 00:10:35.222 PLE Aggregate Log Change Notices: Not Supported 00:10:35.222 LBA Status Info Alert Notices: Not Supported 00:10:35.222 EGE Aggregate Log Change Notices: Not Supported 00:10:35.222 Normal NVM Subsystem Shutdown event: Not Supported 00:10:35.222 Zone Descriptor Change Notices: Not Supported 00:10:35.222 Discovery Log Change Notices: Not Supported 00:10:35.222 Controller Attributes 00:10:35.222 128-bit Host Identifier: Not Supported 00:10:35.222 Non-Operational Permissive Mode: Not Supported 00:10:35.222 NVM Sets: Not Supported 00:10:35.222 Read Recovery Levels: Not Supported 00:10:35.222 Endurance Groups: Not Supported 00:10:35.222 Predictable Latency Mode: Not Supported 00:10:35.222 Traffic Based Keep Alive: Not Supported 00:10:35.222 Namespace Granularity: Not Supported 00:10:35.222 SQ Associations: Not Supported 00:10:35.222 UUID List: Not Supported 00:10:35.222 Multi-Domain Subsystem: Not Supported 00:10:35.222 Fixed Capacity Management: Not Supported 00:10:35.222 Variable Capacity Management: Not Supported 00:10:35.222 Delete Endurance Group: Not Supported 00:10:35.222 Delete NVM Set: Not Supported 00:10:35.222 Extended LBA Formats Supported: Supported 00:10:35.222 Flexible Data Placement Supported: Not Supported 00:10:35.222 00:10:35.222 Controller Memory Buffer Support 00:10:35.222 ================================ 00:10:35.222 Supported: No 00:10:35.222 00:10:35.222 Persistent Memory Region Support 00:10:35.222
================================ 00:10:35.222 Supported: No 00:10:35.222 00:10:35.222 Admin Command Set Attributes 00:10:35.222 ============================ 00:10:35.222 Security Send/Receive: Not Supported 00:10:35.222 Format NVM: Supported 00:10:35.222 Firmware Activate/Download: Not Supported 00:10:35.222 Namespace Management: Supported 00:10:35.222 Device Self-Test: Not Supported 00:10:35.222 Directives: Supported 00:10:35.222 NVMe-MI: Not Supported 00:10:35.222 Virtualization Management: Not Supported 00:10:35.222 Doorbell Buffer Config: Supported 00:10:35.222 Get LBA Status Capability: Not Supported 00:10:35.222 Command & Feature Lockdown Capability: Not Supported 00:10:35.222 Abort Command Limit: 4 00:10:35.222 Async Event Request Limit: 4 00:10:35.222 Number of Firmware Slots: N/A 00:10:35.222 Firmware Slot 1 Read-Only: N/A 00:10:35.222 Firmware Activation Without Reset: N/A 00:10:35.222 Multiple Update Detection Support: N/A 00:10:35.222 Firmware Update Granularity: No Information Provided 00:10:35.222 Per-Namespace SMART Log: Yes 00:10:35.222 Asymmetric Namespace Access Log Page: Not Supported 00:10:35.222 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:35.222 Command Effects Log Page: Supported 00:10:35.223 Get Log Page Extended Data: Supported 00:10:35.223 Telemetry Log Pages: Not Supported 00:10:35.223 Persistent Event Log Pages: Not Supported 00:10:35.223 Supported Log Pages Log Page: May Support 00:10:35.223 Commands Supported & Effects Log Page: Not Supported 00:10:35.223 Feature Identifiers & Effects Log Page: May Support 00:10:35.223 NVMe-MI Commands & Effects Log Page: May Support 00:10:35.223 Data Area 4 for Telemetry Log: Not Supported 00:10:35.223 Error Log Page Entries Supported: 1 00:10:35.223 Keep Alive: Not Supported 00:10:35.223 00:10:35.223 NVM Command Set Attributes 00:10:35.223 ========================== 00:10:35.223 Submission Queue Entry Size 00:10:35.223 Max: 64 00:10:35.223 Min: 64 00:10:35.223 Completion Queue Entry Size 00:10:35.223 Max: 16 00:10:35.223 Min: 16 00:10:35.223 Number of Namespaces: 256 00:10:35.223 Compare Command: Supported 00:10:35.223 Write Uncorrectable Command: Not Supported 00:10:35.223 Dataset Management Command: Supported 00:10:35.223 Write Zeroes Command: Supported 00:10:35.223 Set Features Save Field: Supported 00:10:35.223 Reservations: Not Supported 00:10:35.223 Timestamp: Supported 00:10:35.223 Copy: Supported 00:10:35.223 Volatile Write Cache: Present 00:10:35.223 Atomic Write Unit (Normal): 1 00:10:35.223 Atomic Write Unit (PFail): 1 00:10:35.223 Atomic Compare & Write Unit: 1 00:10:35.223 Fused Compare & Write: Not Supported 00:10:35.223 Scatter-Gather List 00:10:35.223 SGL Command Set: Supported 00:10:35.223 SGL Keyed: Not Supported 00:10:35.223 SGL Bit Bucket Descriptor: Not Supported 00:10:35.223 SGL Metadata Pointer: Not Supported 00:10:35.223 Oversized SGL: Not Supported 00:10:35.223 SGL Metadata Address: Not Supported 00:10:35.223 SGL Offset: Not Supported 00:10:35.223 Transport SGL Data Block: Not Supported 00:10:35.223 Replay Protected Memory Block: Not Supported 00:10:35.223 00:10:35.223 Firmware Slot Information 00:10:35.223 ========================= 00:10:35.223 Active slot: 1 00:10:35.223 Slot 1 Firmware Revision: 1.0 00:10:35.223 00:10:35.223 00:10:35.223 Commands Supported and Effects 00:10:35.223 ============================== 00:10:35.223 Admin Commands 00:10:35.223 -------------- 00:10:35.223 Delete I/O Submission Queue (00h): Supported 00:10:35.223 Create I/O Submission Queue (01h): Supported 00:10:35.223 
Get Log Page (02h): Supported 00:10:35.223 Delete I/O Completion Queue (04h): Supported 00:10:35.223 Create I/O Completion Queue (05h): Supported 00:10:35.223 Identify (06h): Supported 00:10:35.223 Abort (08h): Supported 00:10:35.223 Set Features (09h): Supported 00:10:35.223 Get Features (0Ah): Supported 00:10:35.223 Asynchronous Event Request (0Ch): Supported 00:10:35.223 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:35.223 Directive Send (19h): Supported 00:10:35.223 Directive Receive (1Ah): Supported 00:10:35.223 Virtualization Management (1Ch): Supported 00:10:35.223 Doorbell Buffer Config (7Ch): Supported 00:10:35.223 Format NVM (80h): Supported LBA-Change 00:10:35.223 I/O Commands 00:10:35.223 ------------ 00:10:35.223 Flush (00h): Supported LBA-Change 00:10:35.223 Write (01h): Supported LBA-Change 00:10:35.223 Read (02h): Supported 00:10:35.223 Compare (05h): Supported 00:10:35.223 Write Zeroes (08h): Supported LBA-Change 00:10:35.223 Dataset Management (09h): Supported LBA-Change 00:10:35.223 Unknown (0Ch): Supported 00:10:35.223 Unknown (12h): Supported 00:10:35.223 Copy (19h): Supported LBA-Change 00:10:35.223 Unknown (1Dh): Supported LBA-Change 00:10:35.223 00:10:35.223 Error Log 00:10:35.223 ========= 00:10:35.223 00:10:35.223 Arbitration 00:10:35.223 =========== 00:10:35.223 Arbitration Burst: no limit 00:10:35.223 00:10:35.223 Power Management 00:10:35.223 ================ 00:10:35.223 Number of Power States: 1 00:10:35.223 Current Power State: Power State #0 00:10:35.223 Power State #0: 00:10:35.223 Max Power: 25.00 W 00:10:35.223 Non-Operational State: Operational 00:10:35.223 Entry Latency: 16 microseconds 00:10:35.223 Exit Latency: 4 microseconds 00:10:35.223 Relative Read Throughput: 0 00:10:35.223 Relative Read Latency: 0 00:10:35.223 Relative Write Throughput: 0 00:10:35.223 Relative Write Latency: 0 00:10:35.223 Idle Power: Not Reported 00:10:35.223 Active Power: Not Reported 00:10:35.223 Non-Operational Permissive Mode: Not Supported 00:10:35.223 00:10:35.223 Health Information 00:10:35.223 ================== 00:10:35.223 Critical Warnings: 00:10:35.223 Available Spare Space: OK 00:10:35.223 Temperature: OK 00:10:35.223 Device Reliability: OK 00:10:35.223 Read Only: No 00:10:35.223 Volatile Memory Backup: OK 00:10:35.223 Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.223 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:35.223 Available Spare: 0% 00:10:35.223 Available Spare Threshold: 0% 00:10:35.223 Life Percentage Used: 0% 00:10:35.223 Data Units Read: 750 00:10:35.223 Data Units Written: 679 00:10:35.223 Host Read Commands: 32942 00:10:35.223 Host Write Commands: 32728 00:10:35.223 Controller Busy Time: 0 minutes 00:10:35.223 Power Cycles: 0 00:10:35.223 Power On Hours: 0 hours 00:10:35.223 Unsafe Shutdowns: 0 00:10:35.223 Unrecoverable Media Errors: 0 00:10:35.223 Lifetime Error Log Entries: 0 00:10:35.223 Warning Temperature Time: 0 minutes 00:10:35.223 Critical Temperature Time: 0 minutes 00:10:35.223 00:10:35.223 Number of Queues 00:10:35.223 ================ 00:10:35.223 Number of I/O Submission Queues: 64 00:10:35.223 Number of I/O Completion Queues: 64 00:10:35.223 00:10:35.223 ZNS Specific Controller Data 00:10:35.223 ============================ 00:10:35.223 Zone Append Size Limit: 0 00:10:35.223 00:10:35.223 00:10:35.223 Active Namespaces 00:10:35.223 ================= 00:10:35.223 Namespace ID:1 00:10:35.223 Error Recovery Timeout: Unlimited 00:10:35.223 Command Set Identifier: NVM (00h) 00:10:35.223 Deallocate: Supported 
00:10:35.223 Deallocated/Unwritten Error: Supported 00:10:35.223 Deallocated Read Value: All 0x00 00:10:35.223 Deallocate in Write Zeroes: Not Supported 00:10:35.223 Deallocated Guard Field: 0xFFFF 00:10:35.223 Flush: Supported 00:10:35.223 Reservation: Not Supported 00:10:35.223 Metadata Transferred as: Separate Metadata Buffer 00:10:35.223 Namespace Sharing Capabilities: Private 00:10:35.223 Size (in LBAs): 1548666 (5GiB) 00:10:35.223 Capacity (in LBAs): 1548666 (5GiB) 00:10:35.223 Utilization (in LBAs): 1548666 (5GiB) 00:10:35.223 Thin Provisioning: Not Supported 00:10:35.223 Per-NS Atomic Units: No 00:10:35.223 Maximum Single Source Range Length: 128 00:10:35.223 Maximum Copy Length: 128 00:10:35.223 Maximum Source Range Count: 128 00:10:35.223 NGUID/EUI64 Never Reused: No 00:10:35.223 Namespace Write Protected: No 00:10:35.223 Number of LBA Formats: 8 00:10:35.223 Current LBA Format: LBA Format #07 00:10:35.223 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:35.223 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:35.223 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:35.223 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:35.223 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:35.223 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:35.223 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:35.223 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:35.223 00:10:35.223 NVM Specific Namespace Data 00:10:35.223 =========================== 00:10:35.223 Logical Block Storage Tag Mask: 0 00:10:35.223 Protection Information Capabilities: 00:10:35.223 16b Guard Protection Information Storage Tag Support: No 00:10:35.223 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:35.223 Storage Tag Check Read Support: No 00:10:35.223 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.223 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.223 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.223 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.223 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.223 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.223 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.223 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.223 14:16:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:35.223 14:16:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:35.792 ===================================================== 00:10:35.792 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:35.792 ===================================================== 00:10:35.792 Controller Capabilities/Features 00:10:35.792 ================================ 00:10:35.792 Vendor ID: 1b36 00:10:35.792 Subsystem Vendor ID: 1af4 00:10:35.792 Serial Number: 12341 00:10:35.792 Model Number: QEMU NVMe Ctrl 00:10:35.792 Firmware Version: 8.0.0 00:10:35.792 Recommended Arb Burst: 6 00:10:35.792 IEEE OUI Identifier: 00 54 52 00:10:35.792 Multi-path I/O 00:10:35.792 May have multiple subsystem ports: No 00:10:35.792 May have multiple 
controllers: No 00:10:35.792 Associated with SR-IOV VF: No 00:10:35.792 Max Data Transfer Size: 524288 00:10:35.792 Max Number of Namespaces: 256 00:10:35.792 Max Number of I/O Queues: 64 00:10:35.792 NVMe Specification Version (VS): 1.4 00:10:35.792 NVMe Specification Version (Identify): 1.4 00:10:35.792 Maximum Queue Entries: 2048 00:10:35.792 Contiguous Queues Required: Yes 00:10:35.792 Arbitration Mechanisms Supported 00:10:35.792 Weighted Round Robin: Not Supported 00:10:35.792 Vendor Specific: Not Supported 00:10:35.792 Reset Timeout: 7500 ms 00:10:35.792 Doorbell Stride: 4 bytes 00:10:35.792 NVM Subsystem Reset: Not Supported 00:10:35.792 Command Sets Supported 00:10:35.792 NVM Command Set: Supported 00:10:35.792 Boot Partition: Not Supported 00:10:35.792 Memory Page Size Minimum: 4096 bytes 00:10:35.792 Memory Page Size Maximum: 65536 bytes 00:10:35.792 Persistent Memory Region: Not Supported 00:10:35.792 Optional Asynchronous Events Supported 00:10:35.792 Namespace Attribute Notices: Supported 00:10:35.792 Firmware Activation Notices: Not Supported 00:10:35.792 ANA Change Notices: Not Supported 00:10:35.792 PLE Aggregate Log Change Notices: Not Supported 00:10:35.792 LBA Status Info Alert Notices: Not Supported 00:10:35.792 EGE Aggregate Log Change Notices: Not Supported 00:10:35.792 Normal NVM Subsystem Shutdown event: Not Supported 00:10:35.792 Zone Descriptor Change Notices: Not Supported 00:10:35.792 Discovery Log Change Notices: Not Supported 00:10:35.792 Controller Attributes 00:10:35.792 128-bit Host Identifier: Not Supported 00:10:35.792 Non-Operational Permissive Mode: Not Supported 00:10:35.792 NVM Sets: Not Supported 00:10:35.792 Read Recovery Levels: Not Supported 00:10:35.792 Endurance Groups: Not Supported 00:10:35.792 Predictable Latency Mode: Not Supported 00:10:35.792 Traffic Based Keep Alive: Not Supported 00:10:35.792 Namespace Granularity: Not Supported 00:10:35.792 SQ Associations: Not Supported 00:10:35.792 UUID List: Not Supported 00:10:35.792 Multi-Domain Subsystem: Not Supported 00:10:35.792 Fixed Capacity Management: Not Supported 00:10:35.792 Variable Capacity Management: Not Supported 00:10:35.792 Delete Endurance Group: Not Supported 00:10:35.792 Delete NVM Set: Not Supported 00:10:35.792 Extended LBA Formats Supported: Supported 00:10:35.792 Flexible Data Placement Supported: Not Supported 00:10:35.792 00:10:35.792 Controller Memory Buffer Support 00:10:35.792 ================================ 00:10:35.792 Supported: No 00:10:35.792 00:10:35.792 Persistent Memory Region Support 00:10:35.792 ================================ 00:10:35.792 Supported: No 00:10:35.792 00:10:35.792 Admin Command Set Attributes 00:10:35.792 ============================ 00:10:35.792 Security Send/Receive: Not Supported 00:10:35.792 Format NVM: Supported 00:10:35.792 Firmware Activate/Download: Not Supported 00:10:35.792 Namespace Management: Supported 00:10:35.792 Device Self-Test: Not Supported 00:10:35.792 Directives: Supported 00:10:35.792 NVMe-MI: Not Supported 00:10:35.792 Virtualization Management: Not Supported 00:10:35.792 Doorbell Buffer Config: Supported 00:10:35.792 Get LBA Status Capability: Not Supported 00:10:35.792 Command & Feature Lockdown Capability: Not Supported 00:10:35.792 Abort Command Limit: 4 00:10:35.792 Async Event Request Limit: 4 00:10:35.792 Number of Firmware Slots: N/A 00:10:35.792 Firmware Slot 1 Read-Only: N/A 00:10:35.792 Firmware Activation Without Reset: N/A 00:10:35.792 Multiple Update Detection Support: N/A 00:10:35.792 Firmware Update
Granularity: No Information Provided 00:10:35.792 Per-Namespace SMART Log: Yes 00:10:35.793 Asymmetric Namespace Access Log Page: Not Supported 00:10:35.793 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:35.793 Command Effects Log Page: Supported 00:10:35.793 Get Log Page Extended Data: Supported 00:10:35.793 Telemetry Log Pages: Not Supported 00:10:35.793 Persistent Event Log Pages: Not Supported 00:10:35.793 Supported Log Pages Log Page: May Support 00:10:35.793 Commands Supported & Effects Log Page: Not Supported 00:10:35.793 Feature Identifiers & Effects Log Page: May Support 00:10:35.793 NVMe-MI Commands & Effects Log Page: May Support 00:10:35.793 Data Area 4 for Telemetry Log: Not Supported 00:10:35.793 Error Log Page Entries Supported: 1 00:10:35.793 Keep Alive: Not Supported 00:10:35.793 00:10:35.793 NVM Command Set Attributes 00:10:35.793 ========================== 00:10:35.793 Submission Queue Entry Size 00:10:35.793 Max: 64 00:10:35.793 Min: 64 00:10:35.793 Completion Queue Entry Size 00:10:35.793 Max: 16 00:10:35.793 Min: 16 00:10:35.793 Number of Namespaces: 256 00:10:35.793 Compare Command: Supported 00:10:35.793 Write Uncorrectable Command: Not Supported 00:10:35.793 Dataset Management Command: Supported 00:10:35.793 Write Zeroes Command: Supported 00:10:35.793 Set Features Save Field: Supported 00:10:35.793 Reservations: Not Supported 00:10:35.793 Timestamp: Supported 00:10:35.793 Copy: Supported 00:10:35.793 Volatile Write Cache: Present 00:10:35.793 Atomic Write Unit (Normal): 1 00:10:35.793 Atomic Write Unit (PFail): 1 00:10:35.793 Atomic Compare & Write Unit: 1 00:10:35.793 Fused Compare & Write: Not Supported 00:10:35.793 Scatter-Gather List 00:10:35.793 SGL Command Set: Supported 00:10:35.793 SGL Keyed: Not Supported 00:10:35.793 SGL Bit Bucket Descriptor: Not Supported 00:10:35.793 SGL Metadata Pointer: Not Supported 00:10:35.793 Oversized SGL: Not Supported 00:10:35.793 SGL Metadata Address: Not Supported 00:10:35.793 SGL Offset: Not Supported 00:10:35.793 Transport SGL Data Block: Not Supported 00:10:35.793 Replay Protected Memory Block: Not Supported 00:10:35.793 00:10:35.793 Firmware Slot Information 00:10:35.793 ========================= 00:10:35.793 Active slot: 1 00:10:35.793 Slot 1 Firmware Revision: 1.0 00:10:35.793 00:10:35.793 00:10:35.793 Commands Supported and Effects 00:10:35.793 ============================== 00:10:35.793 Admin Commands 00:10:35.793 -------------- 00:10:35.793 Delete I/O Submission Queue (00h): Supported 00:10:35.793 Create I/O Submission Queue (01h): Supported 00:10:35.793 Get Log Page (02h): Supported 00:10:35.793 Delete I/O Completion Queue (04h): Supported 00:10:35.793 Create I/O Completion Queue (05h): Supported 00:10:35.793 Identify (06h): Supported 00:10:35.793 Abort (08h): Supported 00:10:35.793 Set Features (09h): Supported 00:10:35.793 Get Features (0Ah): Supported 00:10:35.793 Asynchronous Event Request (0Ch): Supported 00:10:35.793 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:35.793 Directive Send (19h): Supported 00:10:35.793 Directive Receive (1Ah): Supported 00:10:35.793 Virtualization Management (1Ch): Supported 00:10:35.793 Doorbell Buffer Config (7Ch): Supported 00:10:35.793 Format NVM (80h): Supported LBA-Change 00:10:35.793 I/O Commands 00:10:35.793 ------------ 00:10:35.793 Flush (00h): Supported LBA-Change 00:10:35.793 Write (01h): Supported LBA-Change 00:10:35.793 Read (02h): Supported 00:10:35.793 Compare (05h): Supported 00:10:35.793 Write Zeroes (08h): Supported LBA-Change 00:10:35.793 
Dataset Management (09h): Supported LBA-Change 00:10:35.793 Unknown (0Ch): Supported 00:10:35.793 Unknown (12h): Supported 00:10:35.793 Copy (19h): Supported LBA-Change 00:10:35.793 Unknown (1Dh): Supported LBA-Change 00:10:35.793 00:10:35.793 Error Log 00:10:35.793 ========= 00:10:35.793 00:10:35.793 Arbitration 00:10:35.793 =========== 00:10:35.793 Arbitration Burst: no limit 00:10:35.793 00:10:35.793 Power Management 00:10:35.793 ================ 00:10:35.793 Number of Power States: 1 00:10:35.793 Current Power State: Power State #0 00:10:35.793 Power State #0: 00:10:35.793 Max Power: 25.00 W 00:10:35.793 Non-Operational State: Operational 00:10:35.793 Entry Latency: 16 microseconds 00:10:35.793 Exit Latency: 4 microseconds 00:10:35.793 Relative Read Throughput: 0 00:10:35.793 Relative Read Latency: 0 00:10:35.793 Relative Write Throughput: 0 00:10:35.793 Relative Write Latency: 0 00:10:35.793 Idle Power: Not Reported 00:10:35.793 Active Power: Not Reported 00:10:35.793 Non-Operational Permissive Mode: Not Supported 00:10:35.793 00:10:35.793 Health Information 00:10:35.793 ================== 00:10:35.793 Critical Warnings: 00:10:35.793 Available Spare Space: OK 00:10:35.793 Temperature: OK 00:10:35.793 Device Reliability: OK 00:10:35.793 Read Only: No 00:10:35.793 Volatile Memory Backup: OK 00:10:35.793 Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.793 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:35.793 Available Spare: 0% 00:10:35.793 Available Spare Threshold: 0% 00:10:35.793 Life Percentage Used: 0% 00:10:35.793 Data Units Read: 1161 00:10:35.793 Data Units Written: 1028 00:10:35.793 Host Read Commands: 49687 00:10:35.793 Host Write Commands: 48482 00:10:35.793 Controller Busy Time: 0 minutes 00:10:35.793 Power Cycles: 0 00:10:35.793 Power On Hours: 0 hours 00:10:35.793 Unsafe Shutdowns: 0 00:10:35.793 Unrecoverable Media Errors: 0 00:10:35.793 Lifetime Error Log Entries: 0 00:10:35.793 Warning Temperature Time: 0 minutes 00:10:35.793 Critical Temperature Time: 0 minutes 00:10:35.793 00:10:35.793 Number of Queues 00:10:35.793 ================ 00:10:35.793 Number of I/O Submission Queues: 64 00:10:35.793 Number of I/O Completion Queues: 64 00:10:35.793 00:10:35.793 ZNS Specific Controller Data 00:10:35.793 ============================ 00:10:35.793 Zone Append Size Limit: 0 00:10:35.793 00:10:35.793 00:10:35.793 Active Namespaces 00:10:35.793 ================= 00:10:35.793 Namespace ID:1 00:10:35.793 Error Recovery Timeout: Unlimited 00:10:35.793 Command Set Identifier: NVM (00h) 00:10:35.793 Deallocate: Supported 00:10:35.793 Deallocated/Unwritten Error: Supported 00:10:35.793 Deallocated Read Value: All 0x00 00:10:35.793 Deallocate in Write Zeroes: Not Supported 00:10:35.793 Deallocated Guard Field: 0xFFFF 00:10:35.793 Flush: Supported 00:10:35.793 Reservation: Not Supported 00:10:35.793 Namespace Sharing Capabilities: Private 00:10:35.793 Size (in LBAs): 1310720 (5GiB) 00:10:35.793 Capacity (in LBAs): 1310720 (5GiB) 00:10:35.793 Utilization (in LBAs): 1310720 (5GiB) 00:10:35.793 Thin Provisioning: Not Supported 00:10:35.793 Per-NS Atomic Units: No 00:10:35.793 Maximum Single Source Range Length: 128 00:10:35.793 Maximum Copy Length: 128 00:10:35.793 Maximum Source Range Count: 128 00:10:35.793 NGUID/EUI64 Never Reused: No 00:10:35.793 Namespace Write Protected: No 00:10:35.793 Number of LBA Formats: 8 00:10:35.793 Current LBA Format: LBA Format #04 00:10:35.793 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:35.793 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:10:35.793 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:35.793 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:35.793 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:35.793 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:35.793 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:35.793 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:35.793 00:10:35.793 NVM Specific Namespace Data 00:10:35.793 =========================== 00:10:35.793 Logical Block Storage Tag Mask: 0 00:10:35.793 Protection Information Capabilities: 00:10:35.793 16b Guard Protection Information Storage Tag Support: No 00:10:35.793 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:35.793 Storage Tag Check Read Support: No 00:10:35.793 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.793 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.793 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.793 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.793 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.793 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.793 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.793 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.793 14:16:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:35.793 14:16:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:36.053 ===================================================== 00:10:36.053 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:36.053 ===================================================== 00:10:36.053 Controller Capabilities/Features 00:10:36.053 ================================ 00:10:36.053 Vendor ID: 1b36 00:10:36.053 Subsystem Vendor ID: 1af4 00:10:36.053 Serial Number: 12342 00:10:36.053 Model Number: QEMU NVMe Ctrl 00:10:36.053 Firmware Version: 8.0.0 00:10:36.053 Recommended Arb Burst: 6 00:10:36.053 IEEE OUI Identifier: 00 54 52 00:10:36.053 Multi-path I/O 00:10:36.053 May have multiple subsystem ports: No 00:10:36.053 May have multiple controllers: No 00:10:36.053 Associated with SR-IOV VF: No 00:10:36.053 Max Data Transfer Size: 524288 00:10:36.053 Max Number of Namespaces: 256 00:10:36.053 Max Number of I/O Queues: 64 00:10:36.053 NVMe Specification Version (VS): 1.4 00:10:36.053 NVMe Specification Version (Identify): 1.4 00:10:36.053 Maximum Queue Entries: 2048 00:10:36.053 Contiguous Queues Required: Yes 00:10:36.053 Arbitration Mechanisms Supported 00:10:36.053 Weighted Round Robin: Not Supported 00:10:36.053 Vendor Specific: Not Supported 00:10:36.053 Reset Timeout: 7500 ms 00:10:36.053 Doorbell Stride: 4 bytes 00:10:36.054 NVM Subsystem Reset: Not Supported 00:10:36.054 Command Sets Supported 00:10:36.054 NVM Command Set: Supported 00:10:36.054 Boot Partition: Not Supported 00:10:36.054 Memory Page Size Minimum: 4096 bytes 00:10:36.054 Memory Page Size Maximum: 65536 bytes 00:10:36.054 Persistent Memory Region: Not Supported 00:10:36.054 Optional Asynchronous Events Supported 00:10:36.054 Namespace Attribute Notices: Supported 00:10:36.054 
Firmware Activation Notices: Not Supported 00:10:36.054 ANA Change Notices: Not Supported 00:10:36.054 PLE Aggregate Log Change Notices: Not Supported 00:10:36.054 LBA Status Info Alert Notices: Not Supported 00:10:36.054 EGE Aggregate Log Change Notices: Not Supported 00:10:36.054 Normal NVM Subsystem Shutdown event: Not Supported 00:10:36.054 Zone Descriptor Change Notices: Not Supported 00:10:36.054 Discovery Log Change Notices: Not Supported 00:10:36.054 Controller Attributes 00:10:36.054 128-bit Host Identifier: Not Supported 00:10:36.054 Non-Operational Permissive Mode: Not Supported 00:10:36.054 NVM Sets: Not Supported 00:10:36.054 Read Recovery Levels: Not Supported 00:10:36.054 Endurance Groups: Not Supported 00:10:36.054 Predictable Latency Mode: Not Supported 00:10:36.054 Traffic Based Keep Alive: Not Supported 00:10:36.054 Namespace Granularity: Not Supported 00:10:36.054 SQ Associations: Not Supported 00:10:36.054 UUID List: Not Supported 00:10:36.054 Multi-Domain Subsystem: Not Supported 00:10:36.054 Fixed Capacity Management: Not Supported 00:10:36.054 Variable Capacity Management: Not Supported 00:10:36.054 Delete Endurance Group: Not Supported 00:10:36.054 Delete NVM Set: Not Supported 00:10:36.054 Extended LBA Formats Supported: Supported 00:10:36.054 Flexible Data Placement Supported: Not Supported 00:10:36.054 00:10:36.054 Controller Memory Buffer Support 00:10:36.054 ================================ 00:10:36.054 Supported: No 00:10:36.054 00:10:36.054 Persistent Memory Region Support 00:10:36.054 ================================ 00:10:36.054 Supported: No 00:10:36.054 00:10:36.054 Admin Command Set Attributes 00:10:36.054 ============================ 00:10:36.054 Security Send/Receive: Not Supported 00:10:36.054 Format NVM: Supported 00:10:36.054 Firmware Activate/Download: Not Supported 00:10:36.054 Namespace Management: Supported 00:10:36.054 Device Self-Test: Not Supported 00:10:36.054 Directives: Supported 00:10:36.054 NVMe-MI: Not Supported 00:10:36.054 Virtualization Management: Not Supported 00:10:36.054 Doorbell Buffer Config: Supported 00:10:36.054 Get LBA Status Capability: Not Supported 00:10:36.054 Command & Feature Lockdown Capability: Not Supported 00:10:36.054 Abort Command Limit: 4 00:10:36.054 Async Event Request Limit: 4 00:10:36.054 Number of Firmware Slots: N/A 00:10:36.054 Firmware Slot 1 Read-Only: N/A 00:10:36.054 Firmware Activation Without Reset: N/A 00:10:36.054 Multiple Update Detection Support: N/A 00:10:36.054 Firmware Update Granularity: No Information Provided 00:10:36.054 Per-Namespace SMART Log: Yes 00:10:36.054 Asymmetric Namespace Access Log Page: Not Supported 00:10:36.054 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:36.054 Command Effects Log Page: Supported 00:10:36.054 Get Log Page Extended Data: Supported 00:10:36.054 Telemetry Log Pages: Not Supported 00:10:36.054 Persistent Event Log Pages: Not Supported 00:10:36.054 Supported Log Pages Log Page: May Support 00:10:36.054 Commands Supported & Effects Log Page: Not Supported 00:10:36.054 Feature Identifiers & Effects Log Page: May Support 00:10:36.054 NVMe-MI Commands & Effects Log Page: May Support 00:10:36.054 Data Area 4 for Telemetry Log: Not Supported 00:10:36.054 Error Log Page Entries Supported: 1 00:10:36.054 Keep Alive: Not Supported 00:10:36.054 00:10:36.054 NVM Command Set Attributes 00:10:36.054 ========================== 00:10:36.054 Submission Queue Entry Size 00:10:36.054 Max: 64 00:10:36.054 Min: 64 00:10:36.054 Completion Queue Entry Size 00:10:36.054 Max: 16
00:10:36.054 Min: 16 00:10:36.054 Number of Namespaces: 256 00:10:36.054 Compare Command: Supported 00:10:36.054 Write Uncorrectable Command: Not Supported 00:10:36.054 Dataset Management Command: Supported 00:10:36.054 Write Zeroes Command: Supported 00:10:36.054 Set Features Save Field: Supported 00:10:36.054 Reservations: Not Supported 00:10:36.054 Timestamp: Supported 00:10:36.054 Copy: Supported 00:10:36.054 Volatile Write Cache: Present 00:10:36.054 Atomic Write Unit (Normal): 1 00:10:36.054 Atomic Write Unit (PFail): 1 00:10:36.054 Atomic Compare & Write Unit: 1 00:10:36.054 Fused Compare & Write: Not Supported 00:10:36.054 Scatter-Gather List 00:10:36.054 SGL Command Set: Supported 00:10:36.054 SGL Keyed: Not Supported 00:10:36.054 SGL Bit Bucket Descriptor: Not Supported 00:10:36.054 SGL Metadata Pointer: Not Supported 00:10:36.054 Oversized SGL: Not Supported 00:10:36.054 SGL Metadata Address: Not Supported 00:10:36.054 SGL Offset: Not Supported 00:10:36.054 Transport SGL Data Block: Not Supported 00:10:36.054 Replay Protected Memory Block: Not Supported 00:10:36.054 00:10:36.054 Firmware Slot Information 00:10:36.054 ========================= 00:10:36.054 Active slot: 1 00:10:36.054 Slot 1 Firmware Revision: 1.0 00:10:36.054 00:10:36.054 00:10:36.054 Commands Supported and Effects 00:10:36.054 ============================== 00:10:36.054 Admin Commands 00:10:36.054 -------------- 00:10:36.054 Delete I/O Submission Queue (00h): Supported 00:10:36.054 Create I/O Submission Queue (01h): Supported 00:10:36.054 Get Log Page (02h): Supported 00:10:36.054 Delete I/O Completion Queue (04h): Supported 00:10:36.054 Create I/O Completion Queue (05h): Supported 00:10:36.054 Identify (06h): Supported 00:10:36.054 Abort (08h): Supported 00:10:36.054 Set Features (09h): Supported 00:10:36.054 Get Features (0Ah): Supported 00:10:36.054 Asynchronous Event Request (0Ch): Supported 00:10:36.054 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:36.054 Directive Send (19h): Supported 00:10:36.054 Directive Receive (1Ah): Supported 00:10:36.054 Virtualization Management (1Ch): Supported 00:10:36.054 Doorbell Buffer Config (7Ch): Supported 00:10:36.054 Format NVM (80h): Supported LBA-Change 00:10:36.054 I/O Commands 00:10:36.054 ------------ 00:10:36.054 Flush (00h): Supported LBA-Change 00:10:36.054 Write (01h): Supported LBA-Change 00:10:36.054 Read (02h): Supported 00:10:36.054 Compare (05h): Supported 00:10:36.054 Write Zeroes (08h): Supported LBA-Change 00:10:36.054 Dataset Management (09h): Supported LBA-Change 00:10:36.054 Unknown (0Ch): Supported 00:10:36.054 Unknown (12h): Supported 00:10:36.055 Copy (19h): Supported LBA-Change 00:10:36.055 Unknown (1Dh): Supported LBA-Change 00:10:36.055 00:10:36.055 Error Log 00:10:36.055 ========= 00:10:36.055 00:10:36.055 Arbitration 00:10:36.055 =========== 00:10:36.055 Arbitration Burst: no limit 00:10:36.055 00:10:36.055 Power Management 00:10:36.055 ================ 00:10:36.055 Number of Power States: 1 00:10:36.055 Current Power State: Power State #0 00:10:36.055 Power State #0: 00:10:36.055 Max Power: 25.00 W 00:10:36.055 Non-Operational State: Operational 00:10:36.055 Entry Latency: 16 microseconds 00:10:36.055 Exit Latency: 4 microseconds 00:10:36.055 Relative Read Throughput: 0 00:10:36.055 Relative Read Latency: 0 00:10:36.055 Relative Write Throughput: 0 00:10:36.055 Relative Write Latency: 0 00:10:36.055 Idle Power: Not Reported 00:10:36.055 Active Power: Not Reported 00:10:36.055 Non-Operational Permissive Mode: Not Supported 
00:10:36.055 00:10:36.055 Health Information 00:10:36.055 ================== 00:10:36.055 Critical Warnings: 00:10:36.055 Available Spare Space: OK 00:10:36.055 Temperature: OK 00:10:36.055 Device Reliability: OK 00:10:36.055 Read Only: No 00:10:36.055 Volatile Memory Backup: OK 00:10:36.055 Current Temperature: 323 Kelvin (50 Celsius) 00:10:36.055 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:36.055 Available Spare: 0% 00:10:36.055 Available Spare Threshold: 0% 00:10:36.055 Life Percentage Used: 0% 00:10:36.055 Data Units Read: 2570 00:10:36.055 Data Units Written: 2357 00:10:36.055 Host Read Commands: 102158 00:10:36.055 Host Write Commands: 100427 00:10:36.055 Controller Busy Time: 0 minutes 00:10:36.055 Power Cycles: 0 00:10:36.055 Power On Hours: 0 hours 00:10:36.055 Unsafe Shutdowns: 0 00:10:36.055 Unrecoverable Media Errors: 0 00:10:36.055 Lifetime Error Log Entries: 0 00:10:36.055 Warning Temperature Time: 0 minutes 00:10:36.055 Critical Temperature Time: 0 minutes 00:10:36.055 00:10:36.055 Number of Queues 00:10:36.055 ================ 00:10:36.055 Number of I/O Submission Queues: 64 00:10:36.055 Number of I/O Completion Queues: 64 00:10:36.055 00:10:36.055 ZNS Specific Controller Data 00:10:36.055 ============================ 00:10:36.055 Zone Append Size Limit: 0 00:10:36.055 00:10:36.055 00:10:36.055 Active Namespaces 00:10:36.055 ================= 00:10:36.055 Namespace ID:1 00:10:36.055 Error Recovery Timeout: Unlimited 00:10:36.055 Command Set Identifier: NVM (00h) 00:10:36.055 Deallocate: Supported 00:10:36.055 Deallocated/Unwritten Error: Supported 00:10:36.055 Deallocated Read Value: All 0x00 00:10:36.055 Deallocate in Write Zeroes: Not Supported 00:10:36.055 Deallocated Guard Field: 0xFFFF 00:10:36.055 Flush: Supported 00:10:36.055 Reservation: Not Supported 00:10:36.055 Namespace Sharing Capabilities: Private 00:10:36.055 Size (in LBAs): 1048576 (4GiB) 00:10:36.055 Capacity (in LBAs): 1048576 (4GiB) 00:10:36.055 Utilization (in LBAs): 1048576 (4GiB) 00:10:36.055 Thin Provisioning: Not Supported 00:10:36.055 Per-NS Atomic Units: No 00:10:36.055 Maximum Single Source Range Length: 128 00:10:36.055 Maximum Copy Length: 128 00:10:36.055 Maximum Source Range Count: 128 00:10:36.055 NGUID/EUI64 Never Reused: No 00:10:36.055 Namespace Write Protected: No 00:10:36.055 Number of LBA Formats: 8 00:10:36.055 Current LBA Format: LBA Format #04 00:10:36.055 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:36.055 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:36.055 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:36.055 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:36.055 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:36.055 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:36.055 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:36.055 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:36.055 00:10:36.055 NVM Specific Namespace Data 00:10:36.055 =========================== 00:10:36.055 Logical Block Storage Tag Mask: 0 00:10:36.055 Protection Information Capabilities: 00:10:36.055 16b Guard Protection Information Storage Tag Support: No 00:10:36.055 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:36.055 Storage Tag Check Read Support: No 00:10:36.055 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Namespace ID:2 00:10:36.055 Error Recovery Timeout: Unlimited 00:10:36.055 Command Set Identifier: NVM (00h) 00:10:36.055 Deallocate: Supported 00:10:36.055 Deallocated/Unwritten Error: Supported 00:10:36.055 Deallocated Read Value: All 0x00 00:10:36.055 Deallocate in Write Zeroes: Not Supported 00:10:36.055 Deallocated Guard Field: 0xFFFF 00:10:36.055 Flush: Supported 00:10:36.055 Reservation: Not Supported 00:10:36.055 Namespace Sharing Capabilities: Private 00:10:36.055 Size (in LBAs): 1048576 (4GiB) 00:10:36.055 Capacity (in LBAs): 1048576 (4GiB) 00:10:36.055 Utilization (in LBAs): 1048576 (4GiB) 00:10:36.055 Thin Provisioning: Not Supported 00:10:36.055 Per-NS Atomic Units: No 00:10:36.055 Maximum Single Source Range Length: 128 00:10:36.055 Maximum Copy Length: 128 00:10:36.055 Maximum Source Range Count: 128 00:10:36.055 NGUID/EUI64 Never Reused: No 00:10:36.055 Namespace Write Protected: No 00:10:36.055 Number of LBA Formats: 8 00:10:36.055 Current LBA Format: LBA Format #04 00:10:36.055 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:36.055 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:36.055 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:36.055 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:36.055 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:36.055 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:36.055 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:36.055 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:36.055 00:10:36.055 NVM Specific Namespace Data 00:10:36.055 =========================== 00:10:36.055 Logical Block Storage Tag Mask: 0 00:10:36.055 Protection Information Capabilities: 00:10:36.055 16b Guard Protection Information Storage Tag Support: No 00:10:36.055 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:36.055 Storage Tag Check Read Support: No 00:10:36.055 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Namespace ID:3 00:10:36.055 Error Recovery Timeout: Unlimited 00:10:36.055 Command Set Identifier: NVM (00h) 00:10:36.055 Deallocate: Supported 00:10:36.055 Deallocated/Unwritten Error: Supported 00:10:36.055 Deallocated Read 
Value: All 0x00 00:10:36.055 Deallocate in Write Zeroes: Not Supported 00:10:36.055 Deallocated Guard Field: 0xFFFF 00:10:36.055 Flush: Supported 00:10:36.055 Reservation: Not Supported 00:10:36.055 Namespace Sharing Capabilities: Private 00:10:36.055 Size (in LBAs): 1048576 (4GiB) 00:10:36.055 Capacity (in LBAs): 1048576 (4GiB) 00:10:36.055 Utilization (in LBAs): 1048576 (4GiB) 00:10:36.055 Thin Provisioning: Not Supported 00:10:36.055 Per-NS Atomic Units: No 00:10:36.055 Maximum Single Source Range Length: 128 00:10:36.055 Maximum Copy Length: 128 00:10:36.055 Maximum Source Range Count: 128 00:10:36.055 NGUID/EUI64 Never Reused: No 00:10:36.055 Namespace Write Protected: No 00:10:36.055 Number of LBA Formats: 8 00:10:36.055 Current LBA Format: LBA Format #04 00:10:36.055 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:36.055 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:36.055 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:36.055 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:36.055 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:36.055 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:36.055 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:36.055 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:36.055 00:10:36.055 NVM Specific Namespace Data 00:10:36.055 =========================== 00:10:36.055 Logical Block Storage Tag Mask: 0 00:10:36.055 Protection Information Capabilities: 00:10:36.055 16b Guard Protection Information Storage Tag Support: No 00:10:36.055 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:36.055 Storage Tag Check Read Support: No 00:10:36.055 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.055 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.056 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.056 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.056 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.056 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.056 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.056 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.056 14:16:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:36.056 14:16:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:36.316 ===================================================== 00:10:36.316 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:36.316 ===================================================== 00:10:36.316 Controller Capabilities/Features 00:10:36.316 ================================ 00:10:36.316 Vendor ID: 1b36 00:10:36.316 Subsystem Vendor ID: 1af4 00:10:36.316 Serial Number: 12343 00:10:36.316 Model Number: QEMU NVMe Ctrl 00:10:36.316 Firmware Version: 8.0.0 00:10:36.316 Recommended Arb Burst: 6 00:10:36.316 IEEE OUI Identifier: 00 54 52 00:10:36.316 Multi-path I/O 00:10:36.316 May have multiple subsystem ports: No 00:10:36.316 May have multiple controllers: Yes 00:10:36.316 Associated with SR-IOV VF: No 00:10:36.316 Max Data Transfer Size: 524288 00:10:36.316 Max Number of Namespaces: 
256 00:10:36.316 Max Number of I/O Queues: 64 00:10:36.316 NVMe Specification Version (VS): 1.4 00:10:36.316 NVMe Specification Version (Identify): 1.4 00:10:36.316 Maximum Queue Entries: 2048 00:10:36.316 Contiguous Queues Required: Yes 00:10:36.316 Arbitration Mechanisms Supported 00:10:36.316 Weighted Round Robin: Not Supported 00:10:36.316 Vendor Specific: Not Supported 00:10:36.316 Reset Timeout: 7500 ms 00:10:36.316 Doorbell Stride: 4 bytes 00:10:36.316 NVM Subsystem Reset: Not Supported 00:10:36.316 Command Sets Supported 00:10:36.316 NVM Command Set: Supported 00:10:36.316 Boot Partition: Not Supported 00:10:36.316 Memory Page Size Minimum: 4096 bytes 00:10:36.316 Memory Page Size Maximum: 65536 bytes 00:10:36.316 Persistent Memory Region: Not Supported 00:10:36.316 Optional Asynchronous Events Supported 00:10:36.316 Namespace Attribute Notices: Supported 00:10:36.316 Firmware Activation Notices: Not Supported 00:10:36.316 ANA Change Notices: Not Supported 00:10:36.316 PLE Aggregate Log Change Notices: Not Supported 00:10:36.316 LBA Status Info Alert Notices: Not Supported 00:10:36.316 EGE Aggregate Log Change Notices: Not Supported 00:10:36.316 Normal NVM Subsystem Shutdown event: Not Supported 00:10:36.316 Zone Descriptor Change Notices: Not Supported 00:10:36.316 Discovery Log Change Notices: Not Supported 00:10:36.316 Controller Attributes 00:10:36.316 128-bit Host Identifier: Not Supported 00:10:36.316 Non-Operational Permissive Mode: Not Supported 00:10:36.316 NVM Sets: Not Supported 00:10:36.316 Read Recovery Levels: Not Supported 00:10:36.316 Endurance Groups: Supported 00:10:36.316 Predictable Latency Mode: Not Supported 00:10:36.316 Traffic Based Keep Alive: Not Supported 00:10:36.316 Namespace Granularity: Not Supported 00:10:36.316 SQ Associations: Not Supported 00:10:36.316 UUID List: Not Supported 00:10:36.316 Multi-Domain Subsystem: Not Supported 00:10:36.316 Fixed Capacity Management: Not Supported 00:10:36.316 Variable Capacity Management: Not Supported 00:10:36.316 Delete Endurance Group: Not Supported 00:10:36.316 Delete NVM Set: Not Supported 00:10:36.316 Extended LBA Formats Supported: Supported 00:10:36.316 Flexible Data Placement Supported: Supported 00:10:36.316 00:10:36.316 Controller Memory Buffer Support 00:10:36.316 ================================ 00:10:36.316 Supported: No 00:10:36.316 00:10:36.316 Persistent Memory Region Support 00:10:36.316 ================================ 00:10:36.316 Supported: No 00:10:36.316 00:10:36.316 Admin Command Set Attributes 00:10:36.316 ============================ 00:10:36.316 Security Send/Receive: Not Supported 00:10:36.316 Format NVM: Supported 00:10:36.316 Firmware Activate/Download: Not Supported 00:10:36.316 Namespace Management: Supported 00:10:36.316 Device Self-Test: Not Supported 00:10:36.316 Directives: Supported 00:10:36.316 NVMe-MI: Not Supported 00:10:36.316 Virtualization Management: Not Supported 00:10:36.316 Doorbell Buffer Config: Supported 00:10:36.316 Get LBA Status Capability: Not Supported 00:10:36.316 Command & Feature Lockdown Capability: Not Supported 00:10:36.316 Abort Command Limit: 4 00:10:36.316 Async Event Request Limit: 4 00:10:36.316 Number of Firmware Slots: N/A 00:10:36.316 Firmware Slot 1 Read-Only: N/A 00:10:36.316 Firmware Activation Without Reset: N/A 00:10:36.316 Multiple Update Detection Support: N/A 00:10:36.316 Firmware Update Granularity: No Information Provided 00:10:36.316 Per-Namespace SMART Log: Yes 00:10:36.316 Asymmetric Namespace Access Log Page: Not Supported
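The records above are the Identify Controller portion of the spdk_nvme_identify dump for the FDP-enabled QEMU controller at 0000:00:13.0; the log-page, command-set, and per-namespace records continue below. The step can be re-run by hand outside the Jenkins harness; a minimal sketch, assuming a local SPDK checkout built under $SPDK_DIR (the PCIe address is the one from this run and will differ on other hosts):

    # Rebind NVMe devices away from the kernel driver so SPDK can claim them,
    # then dump the identify data for one controller, mirroring the logged command.
    sudo $SPDK_DIR/scripts/setup.sh
    sudo $SPDK_DIR/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0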
00:10:36.316 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:36.316 Command Effects Log Page: Supported 00:10:36.316 Get Log Page Extended Data: Supported 00:10:36.316 Telemetry Log Pages: Not Supported 00:10:36.316 Persistent Event Log Pages: Not Supported 00:10:36.316 Supported Log Pages Log Page: May Support 00:10:36.316 Commands Supported & Effects Log Page: Not Supported 00:10:36.316 Feature Identifiers & Effects Log Page: May Support 00:10:36.316 NVMe-MI Commands & Effects Log Page: May Support 00:10:36.316 Data Area 4 for Telemetry Log: Not Supported 00:10:36.316 Error Log Page Entries Supported: 1 00:10:36.316 Keep Alive: Not Supported 00:10:36.316 00:10:36.316 NVM Command Set Attributes 00:10:36.316 ========================== 00:10:36.316 Submission Queue Entry Size 00:10:36.316 Max: 64 00:10:36.316 Min: 64 00:10:36.316 Completion Queue Entry Size 00:10:36.316 Max: 16 00:10:36.316 Min: 16 00:10:36.316 Number of Namespaces: 256 00:10:36.316 Compare Command: Supported 00:10:36.316 Write Uncorrectable Command: Not Supported 00:10:36.316 Dataset Management Command: Supported 00:10:36.316 Write Zeroes Command: Supported 00:10:36.316 Set Features Save Field: Supported 00:10:36.316 Reservations: Not Supported 00:10:36.316 Timestamp: Supported 00:10:36.316 Copy: Supported 00:10:36.316 Volatile Write Cache: Present 00:10:36.316 Atomic Write Unit (Normal): 1 00:10:36.316 Atomic Write Unit (PFail): 1 00:10:36.316 Atomic Compare & Write Unit: 1 00:10:36.316 Fused Compare & Write: Not Supported 00:10:36.316 Scatter-Gather List 00:10:36.316 SGL Command Set: Supported 00:10:36.316 SGL Keyed: Not Supported 00:10:36.316 SGL Bit Bucket Descriptor: Not Supported 00:10:36.316 SGL Metadata Pointer: Not Supported 00:10:36.316 Oversized SGL: Not Supported 00:10:36.316 SGL Metadata Address: Not Supported 00:10:36.316 SGL Offset: Not Supported 00:10:36.316 Transport SGL Data Block: Not Supported 00:10:36.316 Replay Protected Memory Block: Not Supported 00:10:36.316 00:10:36.316 Firmware Slot Information 00:10:36.316 ========================= 00:10:36.316 Active slot: 1 00:10:36.316 Slot 1 Firmware Revision: 1.0 00:10:36.316 00:10:36.316 00:10:36.316 Commands Supported and Effects 00:10:36.316 ============================== 00:10:36.316 Admin Commands 00:10:36.316 -------------- 00:10:36.316 Delete I/O Submission Queue (00h): Supported 00:10:36.316 Create I/O Submission Queue (01h): Supported 00:10:36.316 Get Log Page (02h): Supported 00:10:36.316 Delete I/O Completion Queue (04h): Supported 00:10:36.316 Create I/O Completion Queue (05h): Supported 00:10:36.316 Identify (06h): Supported 00:10:36.316 Abort (08h): Supported 00:10:36.316 Set Features (09h): Supported 00:10:36.316 Get Features (0Ah): Supported 00:10:36.316 Asynchronous Event Request (0Ch): Supported 00:10:36.316 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:36.316 Directive Send (19h): Supported 00:10:36.316 Directive Receive (1Ah): Supported 00:10:36.316 Virtualization Management (1Ch): Supported 00:10:36.316 Doorbell Buffer Config (7Ch): Supported 00:10:36.316 Format NVM (80h): Supported LBA-Change 00:10:36.316 I/O Commands 00:10:36.316 ------------ 00:10:36.317 Flush (00h): Supported LBA-Change 00:10:36.317 Write (01h): Supported LBA-Change 00:10:36.317 Read (02h): Supported 00:10:36.317 Compare (05h): Supported 00:10:36.317 Write Zeroes (08h): Supported LBA-Change 00:10:36.317 Dataset Management (09h): Supported LBA-Change 00:10:36.317 Unknown (0Ch): Supported 00:10:36.317 Unknown (12h): Supported 00:10:36.317 Copy
(19h): Supported LBA-Change 00:10:36.317 Unknown (1Dh): Supported LBA-Change 00:10:36.317 00:10:36.317 Error Log 00:10:36.317 ========= 00:10:36.317 00:10:36.317 Arbitration 00:10:36.317 =========== 00:10:36.317 Arbitration Burst: no limit 00:10:36.317 00:10:36.317 Power Management 00:10:36.317 ================ 00:10:36.317 Number of Power States: 1 00:10:36.317 Current Power State: Power State #0 00:10:36.317 Power State #0: 00:10:36.317 Max Power: 25.00 W 00:10:36.317 Non-Operational State: Operational 00:10:36.317 Entry Latency: 16 microseconds 00:10:36.317 Exit Latency: 4 microseconds 00:10:36.317 Relative Read Throughput: 0 00:10:36.317 Relative Read Latency: 0 00:10:36.317 Relative Write Throughput: 0 00:10:36.317 Relative Write Latency: 0 00:10:36.317 Idle Power: Not Reported 00:10:36.317 Active Power: Not Reported 00:10:36.317 Non-Operational Permissive Mode: Not Supported 00:10:36.317 00:10:36.317 Health Information 00:10:36.317 ================== 00:10:36.317 Critical Warnings: 00:10:36.317 Available Spare Space: OK 00:10:36.317 Temperature: OK 00:10:36.317 Device Reliability: OK 00:10:36.317 Read Only: No 00:10:36.317 Volatile Memory Backup: OK 00:10:36.317 Current Temperature: 323 Kelvin (50 Celsius) 00:10:36.317 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:36.317 Available Spare: 0% 00:10:36.317 Available Spare Threshold: 0% 00:10:36.317 Life Percentage Used: 0% 00:10:36.317 Data Units Read: 1095 00:10:36.317 Data Units Written: 1024 00:10:36.317 Host Read Commands: 35932 00:10:36.317 Host Write Commands: 35355 00:10:36.317 Controller Busy Time: 0 minutes 00:10:36.317 Power Cycles: 0 00:10:36.317 Power On Hours: 0 hours 00:10:36.317 Unsafe Shutdowns: 0 00:10:36.317 Unrecoverable Media Errors: 0 00:10:36.317 Lifetime Error Log Entries: 0 00:10:36.317 Warning Temperature Time: 0 minutes 00:10:36.317 Critical Temperature Time: 0 minutes 00:10:36.317 00:10:36.317 Number of Queues 00:10:36.317 ================ 00:10:36.317 Number of I/O Submission Queues: 64 00:10:36.317 Number of I/O Completion Queues: 64 00:10:36.317 00:10:36.317 ZNS Specific Controller Data 00:10:36.317 ============================ 00:10:36.317 Zone Append Size Limit: 0 00:10:36.317 00:10:36.317 00:10:36.317 Active Namespaces 00:10:36.317 ================= 00:10:36.317 Namespace ID:1 00:10:36.317 Error Recovery Timeout: Unlimited 00:10:36.317 Command Set Identifier: NVM (00h) 00:10:36.317 Deallocate: Supported 00:10:36.317 Deallocated/Unwritten Error: Supported 00:10:36.317 Deallocated Read Value: All 0x00 00:10:36.317 Deallocate in Write Zeroes: Not Supported 00:10:36.317 Deallocated Guard Field: 0xFFFF 00:10:36.317 Flush: Supported 00:10:36.317 Reservation: Not Supported 00:10:36.317 Namespace Sharing Capabilities: Multiple Controllers 00:10:36.317 Size (in LBAs): 262144 (1GiB) 00:10:36.317 Capacity (in LBAs): 262144 (1GiB) 00:10:36.317 Utilization (in LBAs): 262144 (1GiB) 00:10:36.317 Thin Provisioning: Not Supported 00:10:36.317 Per-NS Atomic Units: No 00:10:36.317 Maximum Single Source Range Length: 128 00:10:36.317 Maximum Copy Length: 128 00:10:36.317 Maximum Source Range Count: 128 00:10:36.317 NGUID/EUI64 Never Reused: No 00:10:36.317 Namespace Write Protected: No 00:10:36.317 Endurance group ID: 1 00:10:36.317 Number of LBA Formats: 8 00:10:36.317 Current LBA Format: LBA Format #04 00:10:36.317 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:36.317 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:36.317 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:36.317 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:10:36.317 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:36.317 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:36.317 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:36.317 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:36.317 00:10:36.317 Get Feature FDP: 00:10:36.317 ================ 00:10:36.317 Enabled: Yes 00:10:36.317 FDP configuration index: 0 00:10:36.317 00:10:36.317 FDP configurations log page 00:10:36.317 =========================== 00:10:36.317 Number of FDP configurations: 1 00:10:36.317 Version: 0 00:10:36.317 Size: 112 00:10:36.317 FDP Configuration Descriptor: 0 00:10:36.317 Descriptor Size: 96 00:10:36.317 Reclaim Group Identifier format: 2 00:10:36.317 FDP Volatile Write Cache: Not Present 00:10:36.317 FDP Configuration: Valid 00:10:36.317 Vendor Specific Size: 0 00:10:36.317 Number of Reclaim Groups: 2 00:10:36.317 Number of Reclaim Unit Handles: 8 00:10:36.317 Max Placement Identifiers: 128 00:10:36.317 Number of Namespaces Supported: 256 00:10:36.317 Reclaim Unit Nominal Size: 6000000 bytes 00:10:36.317 Estimated Reclaim Unit Time Limit: Not Reported 00:10:36.317 RUH Desc #000: RUH Type: Initially Isolated 00:10:36.317 RUH Desc #001: RUH Type: Initially Isolated 00:10:36.317 RUH Desc #002: RUH Type: Initially Isolated 00:10:36.317 RUH Desc #003: RUH Type: Initially Isolated 00:10:36.317 RUH Desc #004: RUH Type: Initially Isolated 00:10:36.317 RUH Desc #005: RUH Type: Initially Isolated 00:10:36.317 RUH Desc #006: RUH Type: Initially Isolated 00:10:36.317 RUH Desc #007: RUH Type: Initially Isolated 00:10:36.317 00:10:36.317 FDP reclaim unit handle usage log page 00:10:36.317 ====================================== 00:10:36.317 Number of Reclaim Unit Handles: 8 00:10:36.317 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:36.317 RUH Usage Desc #001: RUH Attributes: Unused 00:10:36.317 RUH Usage Desc #002: RUH Attributes: Unused 00:10:36.317 RUH Usage Desc #003: RUH Attributes: Unused 00:10:36.317 RUH Usage Desc #004: RUH Attributes: Unused 00:10:36.317 RUH Usage Desc #005: RUH Attributes: Unused 00:10:36.317 RUH Usage Desc #006: RUH Attributes: Unused 00:10:36.317 RUH Usage Desc #007: RUH Attributes: Unused 00:10:36.317 00:10:36.317 FDP statistics log page 00:10:36.317 ======================= 00:10:36.317 Host bytes with metadata written: 631021568 00:10:36.317 Media bytes with metadata written: 631103488 00:10:36.317 Media bytes erased: 0 00:10:36.317 00:10:36.317 FDP events log page 00:10:36.317 =================== 00:10:36.317 Number of FDP events: 0 00:10:36.317 00:10:36.317 NVM Specific Namespace Data 00:10:36.317 =========================== 00:10:36.317 Logical Block Storage Tag Mask: 0 00:10:36.317 Protection Information Capabilities: 00:10:36.317 16b Guard Protection Information Storage Tag Support: No 00:10:36.317 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:36.317 Storage Tag Check Read Support: No 00:10:36.317 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.317 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.317 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.317 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.317 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.317 Extended LBA Format
#05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.317 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.317 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.317 ************************************ 00:10:36.317 END TEST nvme_identify 00:10:36.317 ************************************ 00:10:36.317 00:10:36.317 real 0m1.750s 00:10:36.317 user 0m0.641s 00:10:36.317 sys 0m0.883s 00:10:36.317 14:16:01 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.317 14:16:01 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:36.317 14:16:01 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:36.317 14:16:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.317 14:16:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.317 14:16:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:36.317 ************************************ 00:10:36.317 START TEST nvme_perf 00:10:36.317 ************************************ 00:10:36.317 14:16:01 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:10:36.317 14:16:01 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:37.696 Initializing NVMe Controllers 00:10:37.696 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:37.696 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:37.696 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:37.696 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:37.696 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:37.696 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:37.696 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:37.696 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:37.696 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:37.696 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:37.696 Initialization complete. Launching workers. 
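The tables that follow are the output of the spdk_nvme_perf invocation logged just above: queue depth 128 (-q), sequential 12288-byte reads (-w read, -o 12288) for one second (-t 1), with -L given twice to enable the latency tracking that yields both the per-device percentile summaries and the per-bucket histograms further down. To pull just the headline percentiles for each controller out of a saved copy of this console output, a small sketch (console.log is a placeholder for wherever the log is stored):

    # Print the p50/p99/p99.99 latency summary entries for every controller.
    grep -Eo '(50\.00000|99\.00000|99\.99000)% : [0-9.]+us' console.log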
00:10:37.696 ======================================================== 00:10:37.696 Latency(us) 00:10:37.696 Device Information : IOPS MiB/s Average min max 00:10:37.696 PCIE (0000:00:10.0) NSID 1 from core 0: 14348.82 168.15 8939.73 7646.71 49809.18 00:10:37.696 PCIE (0000:00:11.0) NSID 1 from core 0: 14348.82 168.15 8923.25 7746.45 47354.25 00:10:37.696 PCIE (0000:00:13.0) NSID 1 from core 0: 14348.82 168.15 8906.10 7750.05 45520.24 00:10:37.696 PCIE (0000:00:12.0) NSID 1 from core 0: 14348.82 168.15 8889.03 7693.67 43176.69 00:10:37.696 PCIE (0000:00:12.0) NSID 2 from core 0: 14348.82 168.15 8871.57 7709.10 40876.07 00:10:37.696 PCIE (0000:00:12.0) NSID 3 from core 0: 14412.59 168.90 8815.39 7729.83 33812.69 00:10:37.696 ======================================================== 00:10:37.696 Total : 86156.68 1009.65 8890.79 7646.71 49809.18 00:10:37.696 00:10:37.696 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:37.696 ================================================================================= 00:10:37.696 1.00000% : 7790.625us 00:10:37.696 10.00000% : 8053.822us 00:10:37.696 25.00000% : 8264.379us 00:10:37.696 50.00000% : 8527.576us 00:10:37.696 75.00000% : 8843.412us 00:10:37.696 90.00000% : 9053.969us 00:10:37.696 95.00000% : 9264.527us 00:10:37.696 98.00000% : 11001.626us 00:10:37.696 99.00000% : 19055.447us 00:10:37.696 99.50000% : 42953.716us 00:10:37.696 99.90000% : 49480.996us 00:10:37.696 99.99000% : 49902.111us 00:10:37.696 99.99900% : 49902.111us 00:10:37.696 99.99990% : 49902.111us 00:10:37.696 99.99999% : 49902.111us 00:10:37.696 00:10:37.696 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:37.696 ================================================================================= 00:10:37.696 1.00000% : 7895.904us 00:10:37.696 10.00000% : 8106.461us 00:10:37.696 25.00000% : 8317.018us 00:10:37.696 50.00000% : 8527.576us 00:10:37.696 75.00000% : 8790.773us 00:10:37.696 90.00000% : 9001.330us 00:10:37.696 95.00000% : 9211.888us 00:10:37.696 98.00000% : 11528.019us 00:10:37.696 99.00000% : 18529.054us 00:10:37.696 99.50000% : 40637.584us 00:10:37.696 99.90000% : 46954.307us 00:10:37.696 99.99000% : 47375.422us 00:10:37.696 99.99900% : 47375.422us 00:10:37.696 99.99990% : 47375.422us 00:10:37.696 99.99999% : 47375.422us 00:10:37.696 00:10:37.696 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:37.696 ================================================================================= 00:10:37.696 1.00000% : 7895.904us 00:10:37.696 10.00000% : 8106.461us 00:10:37.696 25.00000% : 8317.018us 00:10:37.696 50.00000% : 8527.576us 00:10:37.696 75.00000% : 8790.773us 00:10:37.696 90.00000% : 9001.330us 00:10:37.696 95.00000% : 9211.888us 00:10:37.696 98.00000% : 11633.298us 00:10:37.696 99.00000% : 18739.611us 00:10:37.696 99.50000% : 38953.124us 00:10:37.696 99.90000% : 45269.847us 00:10:37.696 99.99000% : 45690.962us 00:10:37.696 99.99900% : 45690.962us 00:10:37.696 99.99990% : 45690.962us 00:10:37.696 99.99999% : 45690.962us 00:10:37.696 00:10:37.696 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:37.696 ================================================================================= 00:10:37.696 1.00000% : 7895.904us 00:10:37.696 10.00000% : 8106.461us 00:10:37.696 25.00000% : 8317.018us 00:10:37.696 50.00000% : 8527.576us 00:10:37.696 75.00000% : 8790.773us 00:10:37.696 90.00000% : 9001.330us 00:10:37.696 95.00000% : 9211.888us 00:10:37.696 98.00000% : 12054.413us 00:10:37.696 99.00000% : 
18739.611us 00:10:37.696 99.50000% : 36636.993us 00:10:37.696 99.90000% : 42953.716us 00:10:37.696 99.99000% : 43164.273us 00:10:37.696 99.99900% : 43374.831us 00:10:37.696 99.99990% : 43374.831us 00:10:37.696 99.99999% : 43374.831us 00:10:37.696 00:10:37.696 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:37.696 ================================================================================= 00:10:37.696 1.00000% : 7895.904us 00:10:37.696 10.00000% : 8106.461us 00:10:37.696 25.00000% : 8264.379us 00:10:37.696 50.00000% : 8527.576us 00:10:37.696 75.00000% : 8790.773us 00:10:37.696 90.00000% : 9001.330us 00:10:37.696 95.00000% : 9211.888us 00:10:37.696 98.00000% : 12370.249us 00:10:37.696 99.00000% : 18844.890us 00:10:37.696 99.50000% : 34320.861us 00:10:37.697 99.90000% : 40637.584us 00:10:37.697 99.99000% : 40848.141us 00:10:37.697 99.99900% : 41058.699us 00:10:37.697 99.99990% : 41058.699us 00:10:37.697 99.99999% : 41058.699us 00:10:37.697 00:10:37.697 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:37.697 ================================================================================= 00:10:37.697 1.00000% : 7895.904us 00:10:37.697 10.00000% : 8106.461us 00:10:37.697 25.00000% : 8317.018us 00:10:37.697 50.00000% : 8527.576us 00:10:37.697 75.00000% : 8790.773us 00:10:37.697 90.00000% : 9001.330us 00:10:37.697 95.00000% : 9211.888us 00:10:37.697 98.00000% : 12738.724us 00:10:37.697 99.00000% : 18844.890us 00:10:37.697 99.50000% : 27161.908us 00:10:37.697 99.90000% : 33478.631us 00:10:37.697 99.99000% : 33899.746us 00:10:37.697 99.99900% : 33899.746us 00:10:37.697 99.99990% : 33899.746us 00:10:37.697 99.99999% : 33899.746us 00:10:37.697 00:10:37.697 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:37.697 ============================================================================== 00:10:37.697 Range in us Cumulative IO count 00:10:37.697 7632.707 - 7685.346: 0.0833% ( 12) 00:10:37.697 7685.346 - 7737.986: 0.4306% ( 50) 00:10:37.697 7737.986 - 7790.625: 1.0694% ( 92) 00:10:37.697 7790.625 - 7843.264: 2.1736% ( 159) 00:10:37.697 7843.264 - 7895.904: 3.6319% ( 210) 00:10:37.697 7895.904 - 7948.543: 5.8889% ( 325) 00:10:37.697 7948.543 - 8001.182: 8.7222% ( 408) 00:10:37.697 8001.182 - 8053.822: 11.9722% ( 468) 00:10:37.697 8053.822 - 8106.461: 15.7153% ( 539) 00:10:37.697 8106.461 - 8159.100: 19.5000% ( 545) 00:10:37.697 8159.100 - 8211.740: 23.6319% ( 595) 00:10:37.697 8211.740 - 8264.379: 27.8194% ( 603) 00:10:37.697 8264.379 - 8317.018: 32.1736% ( 627) 00:10:37.697 8317.018 - 8369.658: 36.7500% ( 659) 00:10:37.697 8369.658 - 8422.297: 41.4375% ( 675) 00:10:37.697 8422.297 - 8474.937: 46.0833% ( 669) 00:10:37.697 8474.937 - 8527.576: 50.7778% ( 676) 00:10:37.697 8527.576 - 8580.215: 55.4792% ( 677) 00:10:37.697 8580.215 - 8632.855: 60.1528% ( 673) 00:10:37.697 8632.855 - 8685.494: 64.8958% ( 683) 00:10:37.697 8685.494 - 8738.133: 69.5417% ( 669) 00:10:37.697 8738.133 - 8790.773: 73.9653% ( 637) 00:10:37.697 8790.773 - 8843.412: 77.9583% ( 575) 00:10:37.697 8843.412 - 8896.051: 81.8750% ( 564) 00:10:37.697 8896.051 - 8948.691: 85.3472% ( 500) 00:10:37.697 8948.691 - 9001.330: 88.3403% ( 431) 00:10:37.697 9001.330 - 9053.969: 90.6319% ( 330) 00:10:37.697 9053.969 - 9106.609: 92.4097% ( 256) 00:10:37.697 9106.609 - 9159.248: 93.7778% ( 197) 00:10:37.697 9159.248 - 9211.888: 94.6736% ( 129) 00:10:37.697 9211.888 - 9264.527: 95.5069% ( 120) 00:10:37.697 9264.527 - 9317.166: 96.2569% ( 108) 00:10:37.697 9317.166 - 9369.806: 
96.6806% ( 61) 00:10:37.697 9369.806 - 9422.445: 96.9861% ( 44) 00:10:37.697 9422.445 - 9475.084: 97.1736% ( 27) 00:10:37.697 9475.084 - 9527.724: 97.3125% ( 20) 00:10:37.697 9527.724 - 9580.363: 97.3889% ( 11) 00:10:37.697 9580.363 - 9633.002: 97.4722% ( 12) 00:10:37.697 9633.002 - 9685.642: 97.5000% ( 4) 00:10:37.697 9685.642 - 9738.281: 97.5347% ( 5) 00:10:37.697 9738.281 - 9790.920: 97.5625% ( 4) 00:10:37.697 9790.920 - 9843.560: 97.5833% ( 3) 00:10:37.697 9843.560 - 9896.199: 97.6042% ( 3) 00:10:37.697 9896.199 - 9948.839: 97.6250% ( 3) 00:10:37.697 9948.839 - 10001.478: 97.6458% ( 3) 00:10:37.697 10001.478 - 10054.117: 97.6736% ( 4) 00:10:37.697 10054.117 - 10106.757: 97.6875% ( 2) 00:10:37.697 10106.757 - 10159.396: 97.7014% ( 2) 00:10:37.697 10159.396 - 10212.035: 97.7222% ( 3) 00:10:37.697 10212.035 - 10264.675: 97.7639% ( 6) 00:10:37.697 10264.675 - 10317.314: 97.8125% ( 7) 00:10:37.697 10317.314 - 10369.953: 97.8264% ( 2) 00:10:37.697 10369.953 - 10422.593: 97.8403% ( 2) 00:10:37.697 10422.593 - 10475.232: 97.8542% ( 2) 00:10:37.697 10475.232 - 10527.871: 97.8681% ( 2) 00:10:37.697 10527.871 - 10580.511: 97.8819% ( 2) 00:10:37.697 10580.511 - 10633.150: 97.9028% ( 3) 00:10:37.697 10633.150 - 10685.790: 97.9167% ( 2) 00:10:37.697 10685.790 - 10738.429: 97.9306% ( 2) 00:10:37.697 10738.429 - 10791.068: 97.9375% ( 1) 00:10:37.697 10791.068 - 10843.708: 97.9514% ( 2) 00:10:37.697 10843.708 - 10896.347: 97.9722% ( 3) 00:10:37.697 10896.347 - 10948.986: 97.9792% ( 1) 00:10:37.697 10948.986 - 11001.626: 98.0000% ( 3) 00:10:37.697 11001.626 - 11054.265: 98.0069% ( 1) 00:10:37.697 11054.265 - 11106.904: 98.0278% ( 3) 00:10:37.697 11106.904 - 11159.544: 98.0417% ( 2) 00:10:37.697 11159.544 - 11212.183: 98.0556% ( 2) 00:10:37.697 11212.183 - 11264.822: 98.0694% ( 2) 00:10:37.697 11264.822 - 11317.462: 98.0833% ( 2) 00:10:37.697 11317.462 - 11370.101: 98.0972% ( 2) 00:10:37.697 11370.101 - 11422.741: 98.1111% ( 2) 00:10:37.697 11422.741 - 11475.380: 98.1319% ( 3) 00:10:37.697 11475.380 - 11528.019: 98.1389% ( 1) 00:10:37.697 11528.019 - 11580.659: 98.1597% ( 3) 00:10:37.697 11580.659 - 11633.298: 98.1736% ( 2) 00:10:37.697 11633.298 - 11685.937: 98.1875% ( 2) 00:10:37.697 11685.937 - 11738.577: 98.2014% ( 2) 00:10:37.697 11738.577 - 11791.216: 98.2222% ( 3) 00:10:37.697 17581.545 - 17686.824: 98.2431% ( 3) 00:10:37.697 17686.824 - 17792.103: 98.2986% ( 8) 00:10:37.697 17792.103 - 17897.382: 98.3333% ( 5) 00:10:37.697 17897.382 - 18002.660: 98.3750% ( 6) 00:10:37.697 18002.660 - 18107.939: 98.4375% ( 9) 00:10:37.697 18107.939 - 18213.218: 98.5000% ( 9) 00:10:37.697 18213.218 - 18318.496: 98.5833% ( 12) 00:10:37.697 18318.496 - 18423.775: 98.6597% ( 11) 00:10:37.697 18423.775 - 18529.054: 98.7431% ( 12) 00:10:37.697 18529.054 - 18634.333: 98.8194% ( 11) 00:10:37.697 18634.333 - 18739.611: 98.8958% ( 11) 00:10:37.697 18739.611 - 18844.890: 98.9444% ( 7) 00:10:37.697 18844.890 - 18950.169: 98.9792% ( 5) 00:10:37.697 18950.169 - 19055.447: 99.0278% ( 7) 00:10:37.697 19055.447 - 19160.726: 99.0625% ( 5) 00:10:37.697 19160.726 - 19266.005: 99.0972% ( 5) 00:10:37.697 19266.005 - 19371.284: 99.1111% ( 2) 00:10:37.697 40848.141 - 41058.699: 99.1528% ( 6) 00:10:37.697 41058.699 - 41269.256: 99.1944% ( 6) 00:10:37.697 41269.256 - 41479.814: 99.2222% ( 4) 00:10:37.697 41479.814 - 41690.371: 99.2708% ( 7) 00:10:37.697 41690.371 - 41900.929: 99.3194% ( 7) 00:10:37.697 41900.929 - 42111.486: 99.3611% ( 6) 00:10:37.697 42111.486 - 42322.043: 99.3958% ( 5) 00:10:37.697 42322.043 - 42532.601: 99.4514% ( 8) 
00:10:37.697 42532.601 - 42743.158: 99.4931% ( 6) 00:10:37.697 42743.158 - 42953.716: 99.5347% ( 6) 00:10:37.697 42953.716 - 43164.273: 99.5556% ( 3) 00:10:37.697 47585.979 - 47796.537: 99.5694% ( 2) 00:10:37.697 47796.537 - 48007.094: 99.6181% ( 7) 00:10:37.697 48007.094 - 48217.651: 99.6597% ( 6) 00:10:37.697 48217.651 - 48428.209: 99.7014% ( 6) 00:10:37.697 48428.209 - 48638.766: 99.7500% ( 7) 00:10:37.697 48638.766 - 48849.324: 99.7917% ( 6) 00:10:37.697 48849.324 - 49059.881: 99.8403% ( 7) 00:10:37.697 49059.881 - 49270.439: 99.8889% ( 7) 00:10:37.697 49270.439 - 49480.996: 99.9375% ( 7) 00:10:37.697 49480.996 - 49691.553: 99.9792% ( 6) 00:10:37.697 49691.553 - 49902.111: 100.0000% ( 3) 00:10:37.697 00:10:37.697 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:37.697 ============================================================================== 00:10:37.697 Range in us Cumulative IO count 00:10:37.697 7737.986 - 7790.625: 0.1667% ( 24) 00:10:37.697 7790.625 - 7843.264: 0.6667% ( 72) 00:10:37.697 7843.264 - 7895.904: 1.6250% ( 138) 00:10:37.697 7895.904 - 7948.543: 3.1528% ( 220) 00:10:37.697 7948.543 - 8001.182: 5.0625% ( 275) 00:10:37.697 8001.182 - 8053.822: 7.9167% ( 411) 00:10:37.697 8053.822 - 8106.461: 11.5069% ( 517) 00:10:37.697 8106.461 - 8159.100: 15.3889% ( 559) 00:10:37.697 8159.100 - 8211.740: 19.9792% ( 661) 00:10:37.697 8211.740 - 8264.379: 24.6458% ( 672) 00:10:37.697 8264.379 - 8317.018: 29.7292% ( 732) 00:10:37.697 8317.018 - 8369.658: 34.7778% ( 727) 00:10:37.697 8369.658 - 8422.297: 40.2639% ( 790) 00:10:37.697 8422.297 - 8474.937: 45.5417% ( 760) 00:10:37.697 8474.937 - 8527.576: 50.8958% ( 771) 00:10:37.697 8527.576 - 8580.215: 56.5069% ( 808) 00:10:37.697 8580.215 - 8632.855: 61.9444% ( 783) 00:10:37.697 8632.855 - 8685.494: 67.1042% ( 743) 00:10:37.697 8685.494 - 8738.133: 72.2361% ( 739) 00:10:37.697 8738.133 - 8790.773: 76.8889% ( 670) 00:10:37.697 8790.773 - 8843.412: 80.9722% ( 588) 00:10:37.697 8843.412 - 8896.051: 84.7431% ( 543) 00:10:37.697 8896.051 - 8948.691: 87.8472% ( 447) 00:10:37.697 8948.691 - 9001.330: 90.3542% ( 361) 00:10:37.697 9001.330 - 9053.969: 92.1944% ( 265) 00:10:37.697 9053.969 - 9106.609: 93.5347% ( 193) 00:10:37.697 9106.609 - 9159.248: 94.5764% ( 150) 00:10:37.697 9159.248 - 9211.888: 95.4028% ( 119) 00:10:37.697 9211.888 - 9264.527: 96.0903% ( 99) 00:10:37.697 9264.527 - 9317.166: 96.4722% ( 55) 00:10:37.697 9317.166 - 9369.806: 96.7569% ( 41) 00:10:37.697 9369.806 - 9422.445: 96.9514% ( 28) 00:10:37.697 9422.445 - 9475.084: 97.1181% ( 24) 00:10:37.697 9475.084 - 9527.724: 97.2431% ( 18) 00:10:37.697 9527.724 - 9580.363: 97.3542% ( 16) 00:10:37.697 9580.363 - 9633.002: 97.4167% ( 9) 00:10:37.697 9633.002 - 9685.642: 97.4722% ( 8) 00:10:37.697 9685.642 - 9738.281: 97.5347% ( 9) 00:10:37.697 9738.281 - 9790.920: 97.5903% ( 8) 00:10:37.697 9790.920 - 9843.560: 97.6458% ( 8) 00:10:37.697 9843.560 - 9896.199: 97.6944% ( 7) 00:10:37.697 9896.199 - 9948.839: 97.7292% ( 5) 00:10:37.697 9948.839 - 10001.478: 97.7639% ( 5) 00:10:37.697 10001.478 - 10054.117: 97.7778% ( 2) 00:10:37.697 10685.790 - 10738.429: 97.7917% ( 2) 00:10:37.697 10738.429 - 10791.068: 97.7986% ( 1) 00:10:37.697 10791.068 - 10843.708: 97.8125% ( 2) 00:10:37.697 10843.708 - 10896.347: 97.8264% ( 2) 00:10:37.697 10896.347 - 10948.986: 97.8403% ( 2) 00:10:37.697 10948.986 - 11001.626: 97.8542% ( 2) 00:10:37.697 11001.626 - 11054.265: 97.8750% ( 3) 00:10:37.697 11054.265 - 11106.904: 97.8889% ( 2) 00:10:37.697 11106.904 - 11159.544: 97.9028% ( 2) 
00:10:37.697 11159.544 - 11212.183: 97.9236% ( 3) 00:10:37.698 11212.183 - 11264.822: 97.9444% ( 3) 00:10:37.698 11264.822 - 11317.462: 97.9514% ( 1) 00:10:37.698 11317.462 - 11370.101: 97.9653% ( 2) 00:10:37.698 11370.101 - 11422.741: 97.9792% ( 2) 00:10:37.698 11422.741 - 11475.380: 97.9931% ( 2) 00:10:37.698 11475.380 - 11528.019: 98.0069% ( 2) 00:10:37.698 11528.019 - 11580.659: 98.0208% ( 2) 00:10:37.698 11580.659 - 11633.298: 98.0347% ( 2) 00:10:37.698 11633.298 - 11685.937: 98.0486% ( 2) 00:10:37.698 11685.937 - 11738.577: 98.0694% ( 3) 00:10:37.698 11738.577 - 11791.216: 98.0833% ( 2) 00:10:37.698 11791.216 - 11843.855: 98.0972% ( 2) 00:10:37.698 11843.855 - 11896.495: 98.1111% ( 2) 00:10:37.698 11896.495 - 11949.134: 98.1319% ( 3) 00:10:37.698 11949.134 - 12001.773: 98.1458% ( 2) 00:10:37.698 12001.773 - 12054.413: 98.1597% ( 2) 00:10:37.698 12054.413 - 12107.052: 98.1736% ( 2) 00:10:37.698 12107.052 - 12159.692: 98.1806% ( 1) 00:10:37.698 12159.692 - 12212.331: 98.2014% ( 3) 00:10:37.698 12212.331 - 12264.970: 98.2083% ( 1) 00:10:37.698 12264.970 - 12317.610: 98.2222% ( 2) 00:10:37.698 17160.431 - 17265.709: 98.2292% ( 1) 00:10:37.698 17265.709 - 17370.988: 98.2778% ( 7) 00:10:37.698 17370.988 - 17476.267: 98.3125% ( 5) 00:10:37.698 17476.267 - 17581.545: 98.3681% ( 8) 00:10:37.698 17581.545 - 17686.824: 98.4097% ( 6) 00:10:37.698 17686.824 - 17792.103: 98.4583% ( 7) 00:10:37.698 17792.103 - 17897.382: 98.5625% ( 15) 00:10:37.698 17897.382 - 18002.660: 98.6736% ( 16) 00:10:37.698 18002.660 - 18107.939: 98.7639% ( 13) 00:10:37.698 18107.939 - 18213.218: 98.8403% ( 11) 00:10:37.698 18213.218 - 18318.496: 98.9097% ( 10) 00:10:37.698 18318.496 - 18423.775: 98.9583% ( 7) 00:10:37.698 18423.775 - 18529.054: 99.0000% ( 6) 00:10:37.698 18529.054 - 18634.333: 99.0486% ( 7) 00:10:37.698 18634.333 - 18739.611: 99.0833% ( 5) 00:10:37.698 18739.611 - 18844.890: 99.1111% ( 4) 00:10:37.698 38742.567 - 38953.124: 99.1389% ( 4) 00:10:37.698 38953.124 - 39163.682: 99.1806% ( 6) 00:10:37.698 39163.682 - 39374.239: 99.2292% ( 7) 00:10:37.698 39374.239 - 39584.797: 99.2708% ( 6) 00:10:37.698 39584.797 - 39795.354: 99.3125% ( 6) 00:10:37.698 39795.354 - 40005.912: 99.3681% ( 8) 00:10:37.698 40005.912 - 40216.469: 99.4097% ( 6) 00:10:37.698 40216.469 - 40427.027: 99.4653% ( 8) 00:10:37.698 40427.027 - 40637.584: 99.5069% ( 6) 00:10:37.698 40637.584 - 40848.141: 99.5486% ( 6) 00:10:37.698 40848.141 - 41058.699: 99.5556% ( 1) 00:10:37.698 45269.847 - 45480.405: 99.5625% ( 1) 00:10:37.698 45480.405 - 45690.962: 99.6181% ( 8) 00:10:37.698 45690.962 - 45901.520: 99.6667% ( 7) 00:10:37.698 45901.520 - 46112.077: 99.7083% ( 6) 00:10:37.698 46112.077 - 46322.635: 99.7569% ( 7) 00:10:37.698 46322.635 - 46533.192: 99.8056% ( 7) 00:10:37.698 46533.192 - 46743.749: 99.8472% ( 6) 00:10:37.698 46743.749 - 46954.307: 99.9028% ( 8) 00:10:37.698 46954.307 - 47164.864: 99.9514% ( 7) 00:10:37.698 47164.864 - 47375.422: 100.0000% ( 7) 00:10:37.698 00:10:37.698 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:37.698 ============================================================================== 00:10:37.698 Range in us Cumulative IO count 00:10:37.698 7737.986 - 7790.625: 0.1181% ( 17) 00:10:37.698 7790.625 - 7843.264: 0.6806% ( 81) 00:10:37.698 7843.264 - 7895.904: 1.6875% ( 145) 00:10:37.698 7895.904 - 7948.543: 3.1667% ( 213) 00:10:37.698 7948.543 - 8001.182: 5.1875% ( 291) 00:10:37.698 8001.182 - 8053.822: 7.8611% ( 385) 00:10:37.698 8053.822 - 8106.461: 11.4931% ( 523) 00:10:37.698 8106.461 - 
8159.100: 15.5556% ( 585) 00:10:37.698 8159.100 - 8211.740: 20.1111% ( 656) 00:10:37.698 8211.740 - 8264.379: 24.8681% ( 685) 00:10:37.698 8264.379 - 8317.018: 29.8542% ( 718) 00:10:37.698 8317.018 - 8369.658: 34.9583% ( 735) 00:10:37.698 8369.658 - 8422.297: 40.1528% ( 748) 00:10:37.698 8422.297 - 8474.937: 45.5208% ( 773) 00:10:37.698 8474.937 - 8527.576: 51.0278% ( 793) 00:10:37.698 8527.576 - 8580.215: 56.5556% ( 796) 00:10:37.698 8580.215 - 8632.855: 62.0833% ( 796) 00:10:37.698 8632.855 - 8685.494: 67.4861% ( 778) 00:10:37.698 8685.494 - 8738.133: 72.5556% ( 730) 00:10:37.698 8738.133 - 8790.773: 77.1944% ( 668) 00:10:37.698 8790.773 - 8843.412: 81.4514% ( 613) 00:10:37.698 8843.412 - 8896.051: 85.1736% ( 536) 00:10:37.698 8896.051 - 8948.691: 88.2639% ( 445) 00:10:37.698 8948.691 - 9001.330: 90.6875% ( 349) 00:10:37.698 9001.330 - 9053.969: 92.4028% ( 247) 00:10:37.698 9053.969 - 9106.609: 93.7014% ( 187) 00:10:37.698 9106.609 - 9159.248: 94.7292% ( 148) 00:10:37.698 9159.248 - 9211.888: 95.5625% ( 120) 00:10:37.698 9211.888 - 9264.527: 96.2153% ( 94) 00:10:37.698 9264.527 - 9317.166: 96.6875% ( 68) 00:10:37.698 9317.166 - 9369.806: 96.9653% ( 40) 00:10:37.698 9369.806 - 9422.445: 97.1806% ( 31) 00:10:37.698 9422.445 - 9475.084: 97.2986% ( 17) 00:10:37.698 9475.084 - 9527.724: 97.4167% ( 17) 00:10:37.698 9527.724 - 9580.363: 97.5000% ( 12) 00:10:37.698 9580.363 - 9633.002: 97.5556% ( 8) 00:10:37.698 9633.002 - 9685.642: 97.6042% ( 7) 00:10:37.698 9685.642 - 9738.281: 97.6319% ( 4) 00:10:37.698 9738.281 - 9790.920: 97.6667% ( 5) 00:10:37.698 9790.920 - 9843.560: 97.6944% ( 4) 00:10:37.698 9843.560 - 9896.199: 97.7361% ( 6) 00:10:37.698 9896.199 - 9948.839: 97.7500% ( 2) 00:10:37.698 9948.839 - 10001.478: 97.7708% ( 3) 00:10:37.698 10001.478 - 10054.117: 97.7778% ( 1) 00:10:37.698 10843.708 - 10896.347: 97.7917% ( 2) 00:10:37.698 10896.347 - 10948.986: 97.8056% ( 2) 00:10:37.698 10948.986 - 11001.626: 97.8194% ( 2) 00:10:37.698 11001.626 - 11054.265: 97.8403% ( 3) 00:10:37.698 11054.265 - 11106.904: 97.8542% ( 2) 00:10:37.698 11106.904 - 11159.544: 97.8750% ( 3) 00:10:37.698 11159.544 - 11212.183: 97.8819% ( 1) 00:10:37.698 11212.183 - 11264.822: 97.8958% ( 2) 00:10:37.698 11264.822 - 11317.462: 97.9097% ( 2) 00:10:37.698 11317.462 - 11370.101: 97.9236% ( 2) 00:10:37.698 11370.101 - 11422.741: 97.9375% ( 2) 00:10:37.698 11422.741 - 11475.380: 97.9514% ( 2) 00:10:37.698 11475.380 - 11528.019: 97.9722% ( 3) 00:10:37.698 11528.019 - 11580.659: 97.9861% ( 2) 00:10:37.698 11580.659 - 11633.298: 98.0000% ( 2) 00:10:37.698 11633.298 - 11685.937: 98.0139% ( 2) 00:10:37.698 11685.937 - 11738.577: 98.0347% ( 3) 00:10:37.698 11738.577 - 11791.216: 98.0486% ( 2) 00:10:37.698 11791.216 - 11843.855: 98.0694% ( 3) 00:10:37.698 11843.855 - 11896.495: 98.0833% ( 2) 00:10:37.698 11896.495 - 11949.134: 98.0972% ( 2) 00:10:37.698 11949.134 - 12001.773: 98.1181% ( 3) 00:10:37.698 12001.773 - 12054.413: 98.1319% ( 2) 00:10:37.698 12054.413 - 12107.052: 98.1458% ( 2) 00:10:37.698 12107.052 - 12159.692: 98.1597% ( 2) 00:10:37.698 12159.692 - 12212.331: 98.1806% ( 3) 00:10:37.698 12212.331 - 12264.970: 98.1944% ( 2) 00:10:37.698 12264.970 - 12317.610: 98.2083% ( 2) 00:10:37.698 12317.610 - 12370.249: 98.2222% ( 2) 00:10:37.698 17370.988 - 17476.267: 98.2569% ( 5) 00:10:37.698 17476.267 - 17581.545: 98.2917% ( 5) 00:10:37.698 17581.545 - 17686.824: 98.3333% ( 6) 00:10:37.698 17686.824 - 17792.103: 98.3750% ( 6) 00:10:37.698 17792.103 - 17897.382: 98.4167% ( 6) 00:10:37.698 17897.382 - 18002.660: 98.5069% ( 13) 
00:10:37.698 18002.660 - 18107.939: 98.5903% ( 12) 00:10:37.698 18107.939 - 18213.218: 98.6875% ( 14) 00:10:37.698 18213.218 - 18318.496: 98.7847% ( 14) 00:10:37.698 18318.496 - 18423.775: 98.8681% ( 12) 00:10:37.698 18423.775 - 18529.054: 98.9306% ( 9) 00:10:37.698 18529.054 - 18634.333: 98.9792% ( 7) 00:10:37.698 18634.333 - 18739.611: 99.0208% ( 6) 00:10:37.698 18739.611 - 18844.890: 99.0694% ( 7) 00:10:37.698 18844.890 - 18950.169: 99.1111% ( 6) 00:10:37.698 37058.108 - 37268.665: 99.1528% ( 6) 00:10:37.698 37268.665 - 37479.222: 99.2014% ( 7) 00:10:37.698 37479.222 - 37689.780: 99.2431% ( 6) 00:10:37.698 37689.780 - 37900.337: 99.2917% ( 7) 00:10:37.698 37900.337 - 38110.895: 99.3333% ( 6) 00:10:37.698 38110.895 - 38321.452: 99.3819% ( 7) 00:10:37.698 38321.452 - 38532.010: 99.4306% ( 7) 00:10:37.698 38532.010 - 38742.567: 99.4792% ( 7) 00:10:37.698 38742.567 - 38953.124: 99.5278% ( 7) 00:10:37.698 38953.124 - 39163.682: 99.5556% ( 4) 00:10:37.698 43374.831 - 43585.388: 99.5625% ( 1) 00:10:37.698 43585.388 - 43795.945: 99.6111% ( 7) 00:10:37.698 43795.945 - 44006.503: 99.6597% ( 7) 00:10:37.698 44006.503 - 44217.060: 99.7014% ( 6) 00:10:37.698 44217.060 - 44427.618: 99.7500% ( 7) 00:10:37.698 44427.618 - 44638.175: 99.7986% ( 7) 00:10:37.698 44638.175 - 44848.733: 99.8403% ( 6) 00:10:37.698 44848.733 - 45059.290: 99.8958% ( 8) 00:10:37.698 45059.290 - 45269.847: 99.9375% ( 6) 00:10:37.698 45269.847 - 45480.405: 99.9861% ( 7) 00:10:37.698 45480.405 - 45690.962: 100.0000% ( 2) 00:10:37.698 00:10:37.698 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:37.698 ============================================================================== 00:10:37.698 Range in us Cumulative IO count 00:10:37.698 7685.346 - 7737.986: 0.0278% ( 4) 00:10:37.698 7737.986 - 7790.625: 0.1875% ( 23) 00:10:37.698 7790.625 - 7843.264: 0.6597% ( 68) 00:10:37.698 7843.264 - 7895.904: 1.6875% ( 148) 00:10:37.698 7895.904 - 7948.543: 3.1458% ( 210) 00:10:37.698 7948.543 - 8001.182: 4.9514% ( 260) 00:10:37.698 8001.182 - 8053.822: 7.8194% ( 413) 00:10:37.698 8053.822 - 8106.461: 11.2431% ( 493) 00:10:37.698 8106.461 - 8159.100: 15.3819% ( 596) 00:10:37.698 8159.100 - 8211.740: 19.9028% ( 651) 00:10:37.698 8211.740 - 8264.379: 24.8264% ( 709) 00:10:37.698 8264.379 - 8317.018: 29.8125% ( 718) 00:10:37.698 8317.018 - 8369.658: 34.8611% ( 727) 00:10:37.698 8369.658 - 8422.297: 40.1597% ( 763) 00:10:37.698 8422.297 - 8474.937: 45.5278% ( 773) 00:10:37.698 8474.937 - 8527.576: 51.0556% ( 796) 00:10:37.698 8527.576 - 8580.215: 56.6736% ( 809) 00:10:37.698 8580.215 - 8632.855: 62.2083% ( 797) 00:10:37.698 8632.855 - 8685.494: 67.6250% ( 780) 00:10:37.698 8685.494 - 8738.133: 72.7222% ( 734) 00:10:37.698 8738.133 - 8790.773: 77.3056% ( 660) 00:10:37.698 8790.773 - 8843.412: 81.5833% ( 616) 00:10:37.699 8843.412 - 8896.051: 85.3403% ( 541) 00:10:37.699 8896.051 - 8948.691: 88.3611% ( 435) 00:10:37.699 8948.691 - 9001.330: 90.6319% ( 327) 00:10:37.699 9001.330 - 9053.969: 92.3194% ( 243) 00:10:37.699 9053.969 - 9106.609: 93.5556% ( 178) 00:10:37.699 9106.609 - 9159.248: 94.6319% ( 155) 00:10:37.699 9159.248 - 9211.888: 95.5069% ( 126) 00:10:37.699 9211.888 - 9264.527: 96.1736% ( 96) 00:10:37.699 9264.527 - 9317.166: 96.6250% ( 65) 00:10:37.699 9317.166 - 9369.806: 96.9514% ( 47) 00:10:37.699 9369.806 - 9422.445: 97.1667% ( 31) 00:10:37.699 9422.445 - 9475.084: 97.2639% ( 14) 00:10:37.699 9475.084 - 9527.724: 97.3403% ( 11) 00:10:37.699 9527.724 - 9580.363: 97.4167% ( 11) 00:10:37.699 9580.363 - 9633.002: 97.4514% 
( 5) 00:10:37.699 9633.002 - 9685.642: 97.4861% ( 5) 00:10:37.699 9685.642 - 9738.281: 97.5208% ( 5) 00:10:37.699 9738.281 - 9790.920: 97.5625% ( 6) 00:10:37.699 9790.920 - 9843.560: 97.5903% ( 4) 00:10:37.699 9843.560 - 9896.199: 97.6042% ( 2) 00:10:37.699 9896.199 - 9948.839: 97.6181% ( 2) 00:10:37.699 9948.839 - 10001.478: 97.6319% ( 2) 00:10:37.699 10001.478 - 10054.117: 97.6528% ( 3) 00:10:37.699 10054.117 - 10106.757: 97.6667% ( 2) 00:10:37.699 10106.757 - 10159.396: 97.6806% ( 2) 00:10:37.699 10159.396 - 10212.035: 97.6944% ( 2) 00:10:37.699 10212.035 - 10264.675: 97.7083% ( 2) 00:10:37.699 10264.675 - 10317.314: 97.7222% ( 2) 00:10:37.699 10317.314 - 10369.953: 97.7361% ( 2) 00:10:37.699 10369.953 - 10422.593: 97.7500% ( 2) 00:10:37.699 10422.593 - 10475.232: 97.7708% ( 3) 00:10:37.699 10475.232 - 10527.871: 97.7778% ( 1) 00:10:37.699 11317.462 - 11370.101: 97.7847% ( 1) 00:10:37.699 11370.101 - 11422.741: 97.8056% ( 3) 00:10:37.699 11422.741 - 11475.380: 97.8194% ( 2) 00:10:37.699 11475.380 - 11528.019: 97.8472% ( 4) 00:10:37.699 11528.019 - 11580.659: 97.8542% ( 1) 00:10:37.699 11580.659 - 11633.298: 97.8681% ( 2) 00:10:37.699 11633.298 - 11685.937: 97.8889% ( 3) 00:10:37.699 11685.937 - 11738.577: 97.9097% ( 3) 00:10:37.699 11738.577 - 11791.216: 97.9306% ( 3) 00:10:37.699 11791.216 - 11843.855: 97.9514% ( 3) 00:10:37.699 11843.855 - 11896.495: 97.9792% ( 4) 00:10:37.699 11896.495 - 11949.134: 97.9861% ( 1) 00:10:37.699 11949.134 - 12001.773: 97.9931% ( 1) 00:10:37.699 12001.773 - 12054.413: 98.0069% ( 2) 00:10:37.699 12054.413 - 12107.052: 98.0208% ( 2) 00:10:37.699 12107.052 - 12159.692: 98.0347% ( 2) 00:10:37.699 12159.692 - 12212.331: 98.0486% ( 2) 00:10:37.699 12212.331 - 12264.970: 98.0625% ( 2) 00:10:37.699 12264.970 - 12317.610: 98.0833% ( 3) 00:10:37.699 12317.610 - 12370.249: 98.0972% ( 2) 00:10:37.699 12370.249 - 12422.888: 98.1111% ( 2) 00:10:37.699 12422.888 - 12475.528: 98.1319% ( 3) 00:10:37.699 12475.528 - 12528.167: 98.1458% ( 2) 00:10:37.699 12528.167 - 12580.806: 98.1597% ( 2) 00:10:37.699 12580.806 - 12633.446: 98.1736% ( 2) 00:10:37.699 12633.446 - 12686.085: 98.1875% ( 2) 00:10:37.699 12686.085 - 12738.724: 98.2014% ( 2) 00:10:37.699 12738.724 - 12791.364: 98.2222% ( 3) 00:10:37.699 17370.988 - 17476.267: 98.2292% ( 1) 00:10:37.699 17476.267 - 17581.545: 98.2778% ( 7) 00:10:37.699 17581.545 - 17686.824: 98.3264% ( 7) 00:10:37.699 17686.824 - 17792.103: 98.3750% ( 7) 00:10:37.699 17792.103 - 17897.382: 98.4097% ( 5) 00:10:37.699 17897.382 - 18002.660: 98.4722% ( 9) 00:10:37.699 18002.660 - 18107.939: 98.5556% ( 12) 00:10:37.699 18107.939 - 18213.218: 98.6458% ( 13) 00:10:37.699 18213.218 - 18318.496: 98.7431% ( 14) 00:10:37.699 18318.496 - 18423.775: 98.8264% ( 12) 00:10:37.699 18423.775 - 18529.054: 98.9097% ( 12) 00:10:37.699 18529.054 - 18634.333: 98.9583% ( 7) 00:10:37.699 18634.333 - 18739.611: 99.0069% ( 7) 00:10:37.699 18739.611 - 18844.890: 99.0486% ( 6) 00:10:37.699 18844.890 - 18950.169: 99.0903% ( 6) 00:10:37.699 18950.169 - 19055.447: 99.1111% ( 3) 00:10:37.699 34741.976 - 34952.533: 99.1319% ( 3) 00:10:37.699 34952.533 - 35163.091: 99.1806% ( 7) 00:10:37.699 35163.091 - 35373.648: 99.2222% ( 6) 00:10:37.699 35373.648 - 35584.206: 99.2708% ( 7) 00:10:37.699 35584.206 - 35794.763: 99.3194% ( 7) 00:10:37.699 35794.763 - 36005.320: 99.3681% ( 7) 00:10:37.699 36005.320 - 36215.878: 99.4167% ( 7) 00:10:37.699 36215.878 - 36426.435: 99.4653% ( 7) 00:10:37.699 36426.435 - 36636.993: 99.5139% ( 7) 00:10:37.699 36636.993 - 36847.550: 99.5556% ( 6) 
00:10:37.699 41058.699 - 41269.256: 99.5625% ( 1) 00:10:37.699 41269.256 - 41479.814: 99.6111% ( 7) 00:10:37.699 41479.814 - 41690.371: 99.6597% ( 7) 00:10:37.699 41690.371 - 41900.929: 99.7083% ( 7) 00:10:37.699 41900.929 - 42111.486: 99.7569% ( 7) 00:10:37.699 42111.486 - 42322.043: 99.8056% ( 7) 00:10:37.699 42322.043 - 42532.601: 99.8472% ( 6) 00:10:37.699 42532.601 - 42743.158: 99.8958% ( 7) 00:10:37.699 42743.158 - 42953.716: 99.9444% ( 7) 00:10:37.699 42953.716 - 43164.273: 99.9931% ( 7) 00:10:37.699 43164.273 - 43374.831: 100.0000% ( 1) 00:10:37.699 00:10:37.699 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:37.699 ============================================================================== 00:10:37.699 Range in us Cumulative IO count 00:10:37.699 7685.346 - 7737.986: 0.0347% ( 5) 00:10:37.699 7737.986 - 7790.625: 0.2292% ( 28) 00:10:37.699 7790.625 - 7843.264: 0.7500% ( 75) 00:10:37.699 7843.264 - 7895.904: 1.7986% ( 151) 00:10:37.699 7895.904 - 7948.543: 3.2083% ( 203) 00:10:37.699 7948.543 - 8001.182: 5.2847% ( 299) 00:10:37.699 8001.182 - 8053.822: 8.1736% ( 416) 00:10:37.699 8053.822 - 8106.461: 11.6181% ( 496) 00:10:37.699 8106.461 - 8159.100: 15.7639% ( 597) 00:10:37.699 8159.100 - 8211.740: 20.3542% ( 661) 00:10:37.699 8211.740 - 8264.379: 25.0139% ( 671) 00:10:37.699 8264.379 - 8317.018: 29.8750% ( 700) 00:10:37.699 8317.018 - 8369.658: 35.0139% ( 740) 00:10:37.699 8369.658 - 8422.297: 40.2222% ( 750) 00:10:37.699 8422.297 - 8474.937: 45.7014% ( 789) 00:10:37.699 8474.937 - 8527.576: 51.1458% ( 784) 00:10:37.699 8527.576 - 8580.215: 56.6806% ( 797) 00:10:37.699 8580.215 - 8632.855: 62.1042% ( 781) 00:10:37.699 8632.855 - 8685.494: 67.4931% ( 776) 00:10:37.699 8685.494 - 8738.133: 72.4653% ( 716) 00:10:37.699 8738.133 - 8790.773: 77.1806% ( 679) 00:10:37.699 8790.773 - 8843.412: 81.3264% ( 597) 00:10:37.699 8843.412 - 8896.051: 85.1042% ( 544) 00:10:37.699 8896.051 - 8948.691: 88.2014% ( 446) 00:10:37.699 8948.691 - 9001.330: 90.4861% ( 329) 00:10:37.699 9001.330 - 9053.969: 92.2222% ( 250) 00:10:37.699 9053.969 - 9106.609: 93.4722% ( 180) 00:10:37.699 9106.609 - 9159.248: 94.5556% ( 156) 00:10:37.699 9159.248 - 9211.888: 95.4583% ( 130) 00:10:37.699 9211.888 - 9264.527: 96.1597% ( 101) 00:10:37.699 9264.527 - 9317.166: 96.5903% ( 62) 00:10:37.699 9317.166 - 9369.806: 96.8819% ( 42) 00:10:37.699 9369.806 - 9422.445: 97.0764% ( 28) 00:10:37.699 9422.445 - 9475.084: 97.1875% ( 16) 00:10:37.699 9475.084 - 9527.724: 97.2361% ( 7) 00:10:37.699 9527.724 - 9580.363: 97.2847% ( 7) 00:10:37.699 9580.363 - 9633.002: 97.3194% ( 5) 00:10:37.699 9633.002 - 9685.642: 97.3403% ( 3) 00:10:37.699 9685.642 - 9738.281: 97.3819% ( 6) 00:10:37.699 9738.281 - 9790.920: 97.4167% ( 5) 00:10:37.699 9790.920 - 9843.560: 97.4444% ( 4) 00:10:37.699 9843.560 - 9896.199: 97.4653% ( 3) 00:10:37.699 9896.199 - 9948.839: 97.4792% ( 2) 00:10:37.699 9948.839 - 10001.478: 97.4931% ( 2) 00:10:37.699 10001.478 - 10054.117: 97.5069% ( 2) 00:10:37.699 10054.117 - 10106.757: 97.5208% ( 2) 00:10:37.699 10106.757 - 10159.396: 97.5486% ( 4) 00:10:37.699 10159.396 - 10212.035: 97.5625% ( 2) 00:10:37.699 10212.035 - 10264.675: 97.5764% ( 2) 00:10:37.699 10264.675 - 10317.314: 97.5903% ( 2) 00:10:37.699 10317.314 - 10369.953: 97.6042% ( 2) 00:10:37.699 10369.953 - 10422.593: 97.6181% ( 2) 00:10:37.699 10422.593 - 10475.232: 97.6319% ( 2) 00:10:37.699 10475.232 - 10527.871: 97.6458% ( 2) 00:10:37.699 10527.871 - 10580.511: 97.6597% ( 2) 00:10:37.699 10580.511 - 10633.150: 97.6736% ( 2) 00:10:37.699 
10633.150 - 10685.790: 97.6875% ( 2) 00:10:37.699 10685.790 - 10738.429: 97.7083% ( 3) 00:10:37.699 10738.429 - 10791.068: 97.7153% ( 1) 00:10:37.699 10791.068 - 10843.708: 97.7361% ( 3) 00:10:37.699 10843.708 - 10896.347: 97.7500% ( 2) 00:10:37.699 10896.347 - 10948.986: 97.7708% ( 3) 00:10:37.699 10948.986 - 11001.626: 97.7778% ( 1) 00:10:37.699 11633.298 - 11685.937: 97.7847% ( 1) 00:10:37.699 11685.937 - 11738.577: 97.7986% ( 2) 00:10:37.699 11738.577 - 11791.216: 97.8194% ( 3) 00:10:37.699 11791.216 - 11843.855: 97.8333% ( 2) 00:10:37.699 11843.855 - 11896.495: 97.8472% ( 2) 00:10:37.699 11896.495 - 11949.134: 97.8681% ( 3) 00:10:37.699 11949.134 - 12001.773: 97.8819% ( 2) 00:10:37.699 12001.773 - 12054.413: 97.8958% ( 2) 00:10:37.699 12054.413 - 12107.052: 97.9097% ( 2) 00:10:37.699 12107.052 - 12159.692: 97.9306% ( 3) 00:10:37.699 12159.692 - 12212.331: 97.9514% ( 3) 00:10:37.699 12212.331 - 12264.970: 97.9653% ( 2) 00:10:37.699 12264.970 - 12317.610: 97.9861% ( 3) 00:10:37.699 12317.610 - 12370.249: 98.0000% ( 2) 00:10:37.699 12370.249 - 12422.888: 98.0139% ( 2) 00:10:37.699 12422.888 - 12475.528: 98.0278% ( 2) 00:10:37.699 12475.528 - 12528.167: 98.0417% ( 2) 00:10:37.699 12528.167 - 12580.806: 98.0625% ( 3) 00:10:37.699 12580.806 - 12633.446: 98.0764% ( 2) 00:10:37.699 12633.446 - 12686.085: 98.0903% ( 2) 00:10:37.699 12686.085 - 12738.724: 98.1111% ( 3) 00:10:37.699 12738.724 - 12791.364: 98.1250% ( 2) 00:10:37.699 12791.364 - 12844.003: 98.1389% ( 2) 00:10:37.699 12844.003 - 12896.643: 98.1597% ( 3) 00:10:37.699 12896.643 - 12949.282: 98.1736% ( 2) 00:10:37.699 12949.282 - 13001.921: 98.1944% ( 3) 00:10:37.699 13001.921 - 13054.561: 98.2083% ( 2) 00:10:37.699 13054.561 - 13107.200: 98.2222% ( 2) 00:10:37.699 17476.267 - 17581.545: 98.2639% ( 6) 00:10:37.699 17581.545 - 17686.824: 98.3056% ( 6) 00:10:37.699 17686.824 - 17792.103: 98.3542% ( 7) 00:10:37.699 17792.103 - 17897.382: 98.3958% ( 6) 00:10:37.700 17897.382 - 18002.660: 98.4375% ( 6) 00:10:37.700 18002.660 - 18107.939: 98.5139% ( 11) 00:10:37.700 18107.939 - 18213.218: 98.6042% ( 13) 00:10:37.700 18213.218 - 18318.496: 98.7014% ( 14) 00:10:37.700 18318.496 - 18423.775: 98.7917% ( 13) 00:10:37.700 18423.775 - 18529.054: 98.8819% ( 13) 00:10:37.700 18529.054 - 18634.333: 98.9375% ( 8) 00:10:37.700 18634.333 - 18739.611: 98.9792% ( 6) 00:10:37.700 18739.611 - 18844.890: 99.0278% ( 7) 00:10:37.700 18844.890 - 18950.169: 99.0694% ( 6) 00:10:37.700 18950.169 - 19055.447: 99.1111% ( 6) 00:10:37.700 32425.844 - 32636.402: 99.1250% ( 2) 00:10:37.700 32636.402 - 32846.959: 99.1736% ( 7) 00:10:37.700 32846.959 - 33057.516: 99.2222% ( 7) 00:10:37.700 33057.516 - 33268.074: 99.2708% ( 7) 00:10:37.700 33268.074 - 33478.631: 99.3194% ( 7) 00:10:37.700 33478.631 - 33689.189: 99.3681% ( 7) 00:10:37.700 33689.189 - 33899.746: 99.4167% ( 7) 00:10:37.700 33899.746 - 34110.304: 99.4653% ( 7) 00:10:37.700 34110.304 - 34320.861: 99.5069% ( 6) 00:10:37.700 34320.861 - 34531.418: 99.5556% ( 7) 00:10:37.700 38742.567 - 38953.124: 99.5903% ( 5) 00:10:37.700 38953.124 - 39163.682: 99.6389% ( 7) 00:10:37.700 39163.682 - 39374.239: 99.6667% ( 4) 00:10:37.700 39374.239 - 39584.797: 99.7083% ( 6) 00:10:37.700 39584.797 - 39795.354: 99.7500% ( 6) 00:10:37.700 39795.354 - 40005.912: 99.8056% ( 8) 00:10:37.700 40005.912 - 40216.469: 99.8472% ( 6) 00:10:37.700 40216.469 - 40427.027: 99.8958% ( 7) 00:10:37.700 40427.027 - 40637.584: 99.9375% ( 6) 00:10:37.700 40637.584 - 40848.141: 99.9931% ( 8) 00:10:37.700 40848.141 - 41058.699: 100.0000% ( 1) 
00:10:37.700 00:10:37.700 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:37.700 ============================================================================== 00:10:37.700 Range in us Cumulative IO count 00:10:37.700 7685.346 - 7737.986: 0.0207% ( 3) 00:10:37.700 7737.986 - 7790.625: 0.1728% ( 22) 00:10:37.700 7790.625 - 7843.264: 0.7605% ( 85) 00:10:37.700 7843.264 - 7895.904: 1.7077% ( 137) 00:10:37.700 7895.904 - 7948.543: 3.0904% ( 200) 00:10:37.700 7948.543 - 8001.182: 4.9640% ( 271) 00:10:37.700 8001.182 - 8053.822: 7.8540% ( 418) 00:10:37.700 8053.822 - 8106.461: 11.6012% ( 542) 00:10:37.700 8106.461 - 8159.100: 15.5559% ( 572) 00:10:37.700 8159.100 - 8211.740: 20.0982% ( 657) 00:10:37.700 8211.740 - 8264.379: 24.8272% ( 684) 00:10:37.700 8264.379 - 8317.018: 29.7705% ( 715) 00:10:37.700 8317.018 - 8369.658: 34.8659% ( 737) 00:10:37.700 8369.658 - 8422.297: 40.3001% ( 786) 00:10:37.700 8422.297 - 8474.937: 45.6513% ( 774) 00:10:37.700 8474.937 - 8527.576: 51.1200% ( 791) 00:10:37.700 8527.576 - 8580.215: 56.5473% ( 785) 00:10:37.700 8580.215 - 8632.855: 62.0368% ( 794) 00:10:37.700 8632.855 - 8685.494: 67.3880% ( 774) 00:10:37.700 8685.494 - 8738.133: 72.3520% ( 718) 00:10:37.700 8738.133 - 8790.773: 76.9842% ( 670) 00:10:37.700 8790.773 - 8843.412: 81.3468% ( 631) 00:10:37.700 8843.412 - 8896.051: 85.0871% ( 541) 00:10:37.700 8896.051 - 8948.691: 88.0600% ( 430) 00:10:37.700 8948.691 - 9001.330: 90.1894% ( 308) 00:10:37.700 9001.330 - 9053.969: 91.9248% ( 251) 00:10:37.700 9053.969 - 9106.609: 93.1209% ( 173) 00:10:37.700 9106.609 - 9159.248: 94.1925% ( 155) 00:10:37.700 9159.248 - 9211.888: 95.1051% ( 132) 00:10:37.700 9211.888 - 9264.527: 95.8103% ( 102) 00:10:37.700 9264.527 - 9317.166: 96.2597% ( 65) 00:10:37.700 9317.166 - 9369.806: 96.5570% ( 43) 00:10:37.700 9369.806 - 9422.445: 96.7506% ( 28) 00:10:37.700 9422.445 - 9475.084: 96.9096% ( 23) 00:10:37.700 9475.084 - 9527.724: 96.9856% ( 11) 00:10:37.700 9527.724 - 9580.363: 97.0409% ( 8) 00:10:37.700 9580.363 - 9633.002: 97.0893% ( 7) 00:10:37.700 9633.002 - 9685.642: 97.1377% ( 7) 00:10:37.700 9685.642 - 9738.281: 97.1861% ( 7) 00:10:37.700 9738.281 - 9790.920: 97.2069% ( 3) 00:10:37.700 9790.920 - 9843.560: 97.2345% ( 4) 00:10:37.700 9843.560 - 9896.199: 97.2622% ( 4) 00:10:37.700 9896.199 - 9948.839: 97.2967% ( 5) 00:10:37.700 9948.839 - 10001.478: 97.3451% ( 7) 00:10:37.700 10001.478 - 10054.117: 97.3866% ( 6) 00:10:37.700 10054.117 - 10106.757: 97.4074% ( 3) 00:10:37.700 10106.757 - 10159.396: 97.4212% ( 2) 00:10:37.700 10159.396 - 10212.035: 97.4350% ( 2) 00:10:37.700 10212.035 - 10264.675: 97.4488% ( 2) 00:10:37.700 10264.675 - 10317.314: 97.4627% ( 2) 00:10:37.700 10317.314 - 10369.953: 97.4765% ( 2) 00:10:37.700 10369.953 - 10422.593: 97.4903% ( 2) 00:10:37.700 10422.593 - 10475.232: 97.5041% ( 2) 00:10:37.700 10475.232 - 10527.871: 97.5180% ( 2) 00:10:37.700 10527.871 - 10580.511: 97.5387% ( 3) 00:10:37.700 10580.511 - 10633.150: 97.5525% ( 2) 00:10:37.700 10633.150 - 10685.790: 97.5664% ( 2) 00:10:37.700 10685.790 - 10738.429: 97.5871% ( 3) 00:10:37.700 10738.429 - 10791.068: 97.6009% ( 2) 00:10:37.700 10791.068 - 10843.708: 97.6148% ( 2) 00:10:37.700 10843.708 - 10896.347: 97.6286% ( 2) 00:10:37.700 10896.347 - 10948.986: 97.6493% ( 3) 00:10:37.700 10948.986 - 11001.626: 97.6632% ( 2) 00:10:37.700 11001.626 - 11054.265: 97.6839% ( 3) 00:10:37.700 11054.265 - 11106.904: 97.6977% ( 2) 00:10:37.700 11106.904 - 11159.544: 97.7116% ( 2) 00:10:37.700 11159.544 - 11212.183: 97.7323% ( 3) 00:10:37.700 
11212.183 - 11264.822: 97.7461% ( 2) 00:10:37.700 11264.822 - 11317.462: 97.7669% ( 3) 00:10:37.700 11317.462 - 11370.101: 97.7807% ( 2) 00:10:37.700 11370.101 - 11422.741: 97.7876% ( 1) 00:10:37.700 12001.773 - 12054.413: 97.8014% ( 2) 00:10:37.700 12054.413 - 12107.052: 97.8153% ( 2) 00:10:37.700 12107.052 - 12159.692: 97.8291% ( 2) 00:10:37.700 12159.692 - 12212.331: 97.8498% ( 3) 00:10:37.700 12212.331 - 12264.970: 97.8637% ( 2) 00:10:37.700 12264.970 - 12317.610: 97.8706% ( 1) 00:10:37.700 12317.610 - 12370.249: 97.8844% ( 2) 00:10:37.700 12370.249 - 12422.888: 97.8982% ( 2) 00:10:37.700 12422.888 - 12475.528: 97.9190% ( 3) 00:10:37.700 12475.528 - 12528.167: 97.9397% ( 3) 00:10:37.700 12528.167 - 12580.806: 97.9605% ( 3) 00:10:37.700 12580.806 - 12633.446: 97.9743% ( 2) 00:10:37.700 12633.446 - 12686.085: 97.9881% ( 2) 00:10:37.700 12686.085 - 12738.724: 98.0019% ( 2) 00:10:37.700 12738.724 - 12791.364: 98.0158% ( 2) 00:10:37.700 12791.364 - 12844.003: 98.0365% ( 3) 00:10:37.700 12844.003 - 12896.643: 98.0503% ( 2) 00:10:37.700 12896.643 - 12949.282: 98.0642% ( 2) 00:10:37.700 12949.282 - 13001.921: 98.0780% ( 2) 00:10:37.700 13001.921 - 13054.561: 98.0918% ( 2) 00:10:37.700 13054.561 - 13107.200: 98.1126% ( 3) 00:10:37.700 13107.200 - 13159.839: 98.1264% ( 2) 00:10:37.700 13159.839 - 13212.479: 98.1471% ( 3) 00:10:37.700 13212.479 - 13265.118: 98.1610% ( 2) 00:10:37.700 13265.118 - 13317.757: 98.1679% ( 1) 00:10:37.700 13317.757 - 13370.397: 98.1886% ( 3) 00:10:37.700 13370.397 - 13423.036: 98.2024% ( 2) 00:10:37.700 13423.036 - 13475.676: 98.2163% ( 2) 00:10:37.700 13475.676 - 13580.954: 98.2301% ( 2) 00:10:37.700 17581.545 - 17686.824: 98.2785% ( 7) 00:10:37.700 17686.824 - 17792.103: 98.3269% ( 7) 00:10:37.700 17792.103 - 17897.382: 98.3753% ( 7) 00:10:37.700 17897.382 - 18002.660: 98.4237% ( 7) 00:10:37.700 18002.660 - 18107.939: 98.4859% ( 9) 00:10:37.700 18107.939 - 18213.218: 98.5619% ( 11) 00:10:37.700 18213.218 - 18318.496: 98.6795% ( 17) 00:10:37.700 18318.496 - 18423.775: 98.7555% ( 11) 00:10:37.700 18423.775 - 18529.054: 98.8523% ( 14) 00:10:37.700 18529.054 - 18634.333: 98.9076% ( 8) 00:10:37.700 18634.333 - 18739.611: 98.9560% ( 7) 00:10:37.700 18739.611 - 18844.890: 99.0044% ( 7) 00:10:37.700 18844.890 - 18950.169: 99.0528% ( 7) 00:10:37.700 18950.169 - 19055.447: 99.1012% ( 7) 00:10:37.700 19055.447 - 19160.726: 99.1150% ( 2) 00:10:37.700 25372.170 - 25477.449: 99.1289% ( 2) 00:10:37.700 25477.449 - 25582.728: 99.1496% ( 3) 00:10:37.700 25582.728 - 25688.006: 99.1704% ( 3) 00:10:37.700 25688.006 - 25793.285: 99.1980% ( 4) 00:10:37.700 25793.285 - 25898.564: 99.2257% ( 4) 00:10:37.700 25898.564 - 26003.843: 99.2464% ( 3) 00:10:37.700 26003.843 - 26109.121: 99.2671% ( 3) 00:10:37.700 26109.121 - 26214.400: 99.2948% ( 4) 00:10:37.700 26214.400 - 26319.679: 99.3155% ( 3) 00:10:37.700 26319.679 - 26424.957: 99.3432% ( 4) 00:10:37.700 26424.957 - 26530.236: 99.3639% ( 3) 00:10:37.700 26530.236 - 26635.515: 99.3916% ( 4) 00:10:37.700 26635.515 - 26740.794: 99.4192% ( 4) 00:10:37.700 26740.794 - 26846.072: 99.4469% ( 4) 00:10:37.700 26846.072 - 26951.351: 99.4676% ( 3) 00:10:37.700 26951.351 - 27161.908: 99.5160% ( 7) 00:10:37.700 27161.908 - 27372.466: 99.5575% ( 6) 00:10:37.700 31794.172 - 32004.729: 99.5783% ( 3) 00:10:37.700 32004.729 - 32215.287: 99.6267% ( 7) 00:10:37.700 32215.287 - 32425.844: 99.6751% ( 7) 00:10:37.700 32425.844 - 32636.402: 99.7235% ( 7) 00:10:37.700 32636.402 - 32846.959: 99.7718% ( 7) 00:10:37.700 32846.959 - 33057.516: 99.8202% ( 7) 00:10:37.700 
33057.516 - 33268.074: 99.8686% ( 7)
00:10:37.700 33268.074 - 33478.631: 99.9170% ( 7)
00:10:37.700 33478.631 - 33689.189: 99.9654% ( 7)
00:10:37.700 33689.189 - 33899.746: 100.0000% ( 5)
00:10:37.700
00:10:37.700 14:16:02 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:10:39.083 Initializing NVMe Controllers
00:10:39.083 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:39.083 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:39.083 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:39.083 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:39.083 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:10:39.083 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:10:39.083 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:10:39.083 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:10:39.083 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:10:39.083 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:10:39.083 Initialization complete. Launching workers.
00:10:39.083 ========================================================
00:10:39.083 Latency(us)
00:10:39.083 Device Information : IOPS MiB/s Average min max
00:10:39.083 PCIE (0000:00:10.0) NSID 1 from core 0: 11667.98 136.73 10998.04 8308.90 44555.12
00:10:39.083 PCIE (0000:00:11.0) NSID 1 from core 0: 11667.98 136.73 10980.22 8485.49 42466.17
00:10:39.083 PCIE (0000:00:13.0) NSID 1 from core 0: 11667.98 136.73 10962.43 8390.25 41381.02
00:10:39.083 PCIE (0000:00:12.0) NSID 1 from core 0: 11667.98 136.73 10944.34 8240.66 39245.32
00:10:39.083 PCIE (0000:00:12.0) NSID 2 from core 0: 11667.98 136.73 10926.27 8289.72 37520.49
00:10:39.083 PCIE (0000:00:12.0) NSID 3 from core 0: 11731.74 137.48 10848.12 8373.84 29428.04
00:10:39.083 ========================================================
00:10:39.083 Total : 70071.62 821.15 10943.15 8240.66 44555.12
00:10:39.083
00:10:39.083 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:10:39.083 =================================================================================
00:10:39.083 1.00000% : 8738.133us
00:10:39.083 10.00000% : 9159.248us
00:10:39.083 25.00000% : 9422.445us
00:10:39.083 50.00000% : 9843.560us
00:10:39.083 75.00000% : 11001.626us
00:10:39.083 90.00000% : 14633.741us
00:10:39.083 95.00000% : 16002.365us
00:10:39.083 98.00000% : 18318.496us
00:10:39.083 99.00000% : 34320.861us
00:10:39.083 99.50000% : 42743.158us
00:10:39.083 99.90000% : 44217.060us
00:10:39.083 99.99000% : 44638.175us
00:10:39.083 99.99900% : 44638.175us
00:10:39.083 99.99990% : 44638.175us
00:10:39.083 99.99999% : 44638.175us
00:10:39.083
00:10:39.083 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:10:39.083 =================================================================================
00:10:39.083 1.00000% : 8843.412us
00:10:39.083 10.00000% : 9211.888us
00:10:39.083 25.00000% : 9475.084us
00:10:39.083 50.00000% : 9790.920us
00:10:39.083 75.00000% : 10896.347us
00:10:39.083 90.00000% : 14528.463us
00:10:39.083 95.00000% : 16212.922us
00:10:39.083 98.00000% : 18529.054us
00:10:39.083 99.00000% : 32636.402us
00:10:39.083 99.50000% : 40848.141us
00:10:39.083 99.90000% : 42322.043us
00:10:39.083 99.99000% : 42532.601us
00:10:39.083 99.99900% : 42532.601us
00:10:39.083 99.99990% : 42532.601us
00:10:39.083 99.99999% : 42532.601us
00:10:39.083
00:10:39.083 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:10:39.083 =================================================================================
00:10:39.083 1.00000% : 8790.773us
00:10:39.083 10.00000% : 9159.248us
00:10:39.083 25.00000% : 9475.084us
00:10:39.083 50.00000% : 9843.560us
00:10:39.083 75.00000% : 10948.986us
00:10:39.083 90.00000% : 14107.348us
00:10:39.083 95.00000% : 16634.037us
00:10:39.083 98.00000% : 18002.660us
00:10:39.083 99.00000% : 32004.729us
00:10:39.083 99.50000% : 39795.354us
00:10:39.083 99.90000% : 41269.256us
00:10:39.083 99.99000% : 41479.814us
00:10:39.083 99.99900% : 41479.814us
00:10:39.083 99.99990% : 41479.814us
00:10:39.083 99.99999% : 41479.814us
00:10:39.083
00:10:39.083 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:39.083 =================================================================================
00:10:39.083 1.00000% : 8790.773us
00:10:39.083 10.00000% : 9211.888us
00:10:39.083 25.00000% : 9475.084us
00:10:39.083 50.00000% : 9843.560us
00:10:39.083 75.00000% : 11001.626us
00:10:39.083 90.00000% : 14212.627us
00:10:39.083 95.00000% : 16528.758us
00:10:39.083 98.00000% : 17581.545us
00:10:39.083 99.00000% : 30109.712us
00:10:39.083 99.50000% : 37689.780us
00:10:39.083 99.90000% : 38953.124us
00:10:39.083 99.99000% : 39374.239us
00:10:39.083 99.99900% : 39374.239us
00:10:39.083 99.99990% : 39374.239us
00:10:39.083 99.99999% : 39374.239us
00:10:39.083
00:10:39.083 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:10:39.083 =================================================================================
00:10:39.083 1.00000% : 8738.133us
00:10:39.083 10.00000% : 9211.888us
00:10:39.083 25.00000% : 9475.084us
00:10:39.083 50.00000% : 9843.560us
00:10:39.083 75.00000% : 11001.626us
00:10:39.083 90.00000% : 14423.184us
00:10:39.083 95.00000% : 16318.201us
00:10:39.083 98.00000% : 17792.103us
00:10:39.083 99.00000% : 28425.253us
00:10:39.083 99.50000% : 36005.320us
00:10:39.083 99.90000% : 37268.665us
00:10:39.083 99.99000% : 37689.780us
00:10:39.083 99.99900% : 37689.780us
00:10:39.083 99.99990% : 37689.780us
00:10:39.083 99.99999% : 37689.780us
00:10:39.083
00:10:39.083 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:10:39.083 =================================================================================
00:10:39.083 1.00000% : 8790.773us
00:10:39.083 10.00000% : 9211.888us
00:10:39.083 25.00000% : 9475.084us
00:10:39.083 50.00000% : 9843.560us
00:10:39.083 75.00000% : 11106.904us
00:10:39.083 90.00000% : 14423.184us
00:10:39.083 95.00000% : 15791.807us
00:10:39.083 98.00000% : 18107.939us
00:10:39.083 99.00000% : 19897.677us
00:10:39.083 99.50000% : 27793.581us
00:10:39.083 99.90000% : 29267.483us
00:10:39.083 99.99000% : 29478.040us
00:10:39.083 99.99900% : 29478.040us
00:10:39.083 99.99990% : 29478.040us
00:10:39.083 99.99999% : 29478.040us
00:10:39.083
00:10:39.083 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:10:39.083 ==============================================================================
00:10:39.083 Range in us Cumulative IO count
00:10:39.083 8264.379 - 8317.018: 0.0171% ( 2)
00:10:39.083 8317.018 - 8369.658: 0.1622% ( 17)
00:10:39.083 8369.658 - 8422.297: 0.3842% ( 26)
00:10:39.083 8422.297 - 8474.937: 0.5208% ( 16)
00:10:39.083 8474.937 - 8527.576: 0.5891% ( 8)
00:10:39.083 8527.576 - 8580.215: 0.6831% ( 11)
00:10:39.083 8580.215 - 8632.855: 0.7599% ( 9)
00:10:39.083 8632.855 - 8685.494: 0.9648% ( 24)
00:10:39.083 8685.494 - 8738.133: 1.1868% ( 26)
00:10:39.083 8738.133 - 8790.773: 1.6223% ( 51)
00:10:39.083 8790.773 - 8843.412: 2.1260% ( 59) 00:10:39.083 8843.412 - 8896.051: 2.8262% ( 82) 00:10:39.083 8896.051 - 8948.691: 3.9447% ( 131) 00:10:39.083 8948.691 - 9001.330: 4.9863% ( 122) 00:10:39.083 9001.330 - 9053.969: 6.3695% ( 162) 00:10:39.083 9053.969 - 9106.609: 8.2906% ( 225) 00:10:39.083 9106.609 - 9159.248: 10.7923% ( 293) 00:10:39.083 9159.248 - 9211.888: 13.0550% ( 265) 00:10:39.083 9211.888 - 9264.527: 15.1981% ( 251) 00:10:39.083 9264.527 - 9317.166: 18.5963% ( 398) 00:10:39.083 9317.166 - 9369.806: 22.4129% ( 447) 00:10:39.083 9369.806 - 9422.445: 25.1366% ( 319) 00:10:39.083 9422.445 - 9475.084: 28.8337% ( 433) 00:10:39.083 9475.084 - 9527.724: 32.8637% ( 472) 00:10:39.083 9527.724 - 9580.363: 36.9621% ( 480) 00:10:39.083 9580.363 - 9633.002: 40.7787% ( 447) 00:10:39.083 9633.002 - 9685.642: 43.5707% ( 327) 00:10:39.083 9685.642 - 9738.281: 46.0297% ( 288) 00:10:39.083 9738.281 - 9790.920: 48.5400% ( 294) 00:10:39.083 9790.920 - 9843.560: 50.7428% ( 258) 00:10:39.083 9843.560 - 9896.199: 53.0396% ( 269) 00:10:39.083 9896.199 - 9948.839: 54.9607% ( 225) 00:10:39.083 9948.839 - 10001.478: 56.7538% ( 210) 00:10:39.083 10001.478 - 10054.117: 58.3931% ( 192) 00:10:39.083 10054.117 - 10106.757: 59.5714% ( 138) 00:10:39.083 10106.757 - 10159.396: 60.7582% ( 139) 00:10:39.083 10159.396 - 10212.035: 61.9194% ( 136) 00:10:39.083 10212.035 - 10264.675: 62.9696% ( 123) 00:10:39.083 10264.675 - 10317.314: 64.0198% ( 123) 00:10:39.083 10317.314 - 10369.953: 64.9590% ( 110) 00:10:39.083 10369.953 - 10422.593: 65.8982% ( 110) 00:10:39.083 10422.593 - 10475.232: 66.9826% ( 127) 00:10:39.083 10475.232 - 10527.871: 68.0328% ( 123) 00:10:39.083 10527.871 - 10580.511: 68.9549% ( 108) 00:10:39.083 10580.511 - 10633.150: 70.3040% ( 158) 00:10:39.083 10633.150 - 10685.790: 71.3883% ( 127) 00:10:39.083 10685.790 - 10738.429: 72.4129% ( 120) 00:10:39.083 10738.429 - 10791.068: 73.0960% ( 80) 00:10:39.083 10791.068 - 10843.708: 73.5827% ( 57) 00:10:39.083 10843.708 - 10896.347: 74.0693% ( 57) 00:10:39.083 10896.347 - 10948.986: 74.6243% ( 65) 00:10:39.083 10948.986 - 11001.626: 75.0427% ( 49) 00:10:39.083 11001.626 - 11054.265: 75.6574% ( 72) 00:10:39.083 11054.265 - 11106.904: 76.1270% ( 55) 00:10:39.083 11106.904 - 11159.544: 76.7845% ( 77) 00:10:39.083 11159.544 - 11212.183: 77.2456% ( 54) 00:10:39.084 11212.183 - 11264.822: 77.6981% ( 53) 00:10:39.084 11264.822 - 11317.462: 77.9372% ( 28) 00:10:39.084 11317.462 - 11370.101: 78.2104% ( 32) 00:10:39.084 11370.101 - 11422.741: 78.4921% ( 33) 00:10:39.084 11422.741 - 11475.380: 78.7398% ( 29) 00:10:39.084 11475.380 - 11528.019: 78.9959% ( 30) 00:10:39.084 11528.019 - 11580.659: 79.3033% ( 36) 00:10:39.084 11580.659 - 11633.298: 79.6533% ( 41) 00:10:39.084 11633.298 - 11685.937: 79.9095% ( 30) 00:10:39.084 11685.937 - 11738.577: 80.1144% ( 24) 00:10:39.084 11738.577 - 11791.216: 80.3193% ( 24) 00:10:39.084 11791.216 - 11843.855: 80.4901% ( 20) 00:10:39.084 11843.855 - 11896.495: 80.6609% ( 20) 00:10:39.084 11896.495 - 11949.134: 80.8572% ( 23) 00:10:39.084 11949.134 - 12001.773: 81.1390% ( 33) 00:10:39.084 12001.773 - 12054.413: 81.4378% ( 35) 00:10:39.084 12054.413 - 12107.052: 81.7281% ( 34) 00:10:39.084 12107.052 - 12159.692: 82.0184% ( 34) 00:10:39.084 12159.692 - 12212.331: 82.2746% ( 30) 00:10:39.084 12212.331 - 12264.970: 82.5905% ( 37) 00:10:39.084 12264.970 - 12317.610: 82.7698% ( 21) 00:10:39.084 12317.610 - 12370.249: 83.0003% ( 27) 00:10:39.084 12370.249 - 12422.888: 83.2394% ( 28) 00:10:39.084 12422.888 - 12475.528: 83.3931% ( 18) 
00:10:39.084 12475.528 - 12528.167: 83.5639% ( 20) 00:10:39.084 12528.167 - 12580.806: 83.7005% ( 16) 00:10:39.084 12580.806 - 12633.446: 83.7602% ( 7) 00:10:39.084 12633.446 - 12686.085: 84.0335% ( 32) 00:10:39.084 12686.085 - 12738.724: 84.4775% ( 52) 00:10:39.084 12738.724 - 12791.364: 84.7251% ( 29) 00:10:39.084 12791.364 - 12844.003: 84.9044% ( 21) 00:10:39.084 12844.003 - 12896.643: 85.1434% ( 28) 00:10:39.084 12896.643 - 12949.282: 85.3484% ( 24) 00:10:39.084 12949.282 - 13001.921: 85.5874% ( 28) 00:10:39.084 13001.921 - 13054.561: 85.8009% ( 25) 00:10:39.084 13054.561 - 13107.200: 86.0400% ( 28) 00:10:39.084 13107.200 - 13159.839: 86.2278% ( 22) 00:10:39.084 13159.839 - 13212.479: 86.4413% ( 25) 00:10:39.084 13212.479 - 13265.118: 86.6889% ( 29) 00:10:39.084 13265.118 - 13317.757: 86.8767% ( 22) 00:10:39.084 13317.757 - 13370.397: 87.0389% ( 19) 00:10:39.084 13370.397 - 13423.036: 87.1243% ( 10) 00:10:39.084 13423.036 - 13475.676: 87.3122% ( 22) 00:10:39.084 13475.676 - 13580.954: 87.4829% ( 20) 00:10:39.084 13580.954 - 13686.233: 87.6537% ( 20) 00:10:39.084 13686.233 - 13791.512: 87.8330% ( 21) 00:10:39.084 13791.512 - 13896.790: 88.1489% ( 37) 00:10:39.084 13896.790 - 14002.069: 88.3367% ( 22) 00:10:39.084 14002.069 - 14107.348: 88.5929% ( 30) 00:10:39.084 14107.348 - 14212.627: 88.8576% ( 31) 00:10:39.084 14212.627 - 14317.905: 89.1052% ( 29) 00:10:39.084 14317.905 - 14423.184: 89.4638% ( 42) 00:10:39.084 14423.184 - 14528.463: 89.8822% ( 49) 00:10:39.084 14528.463 - 14633.741: 90.5652% ( 80) 00:10:39.084 14633.741 - 14739.020: 91.0519% ( 57) 00:10:39.084 14739.020 - 14844.299: 91.4532% ( 47) 00:10:39.084 14844.299 - 14949.578: 91.8630% ( 48) 00:10:39.084 14949.578 - 15054.856: 92.2302% ( 43) 00:10:39.084 15054.856 - 15160.135: 92.6742% ( 52) 00:10:39.084 15160.135 - 15265.414: 93.1950% ( 61) 00:10:39.084 15265.414 - 15370.692: 93.6561% ( 54) 00:10:39.084 15370.692 - 15475.971: 94.0830% ( 50) 00:10:39.084 15475.971 - 15581.250: 94.3904% ( 36) 00:10:39.084 15581.250 - 15686.529: 94.6124% ( 26) 00:10:39.084 15686.529 - 15791.807: 94.8087% ( 23) 00:10:39.084 15791.807 - 15897.086: 94.9539% ( 17) 00:10:39.084 15897.086 - 16002.365: 95.1161% ( 19) 00:10:39.084 16002.365 - 16107.643: 95.2783% ( 19) 00:10:39.084 16107.643 - 16212.922: 95.3979% ( 14) 00:10:39.084 16212.922 - 16318.201: 95.4662% ( 8) 00:10:39.084 16318.201 - 16423.480: 95.5003% ( 4) 00:10:39.084 16423.480 - 16528.758: 95.5260% ( 3) 00:10:39.084 16528.758 - 16634.037: 95.5601% ( 4) 00:10:39.084 16634.037 - 16739.316: 95.5857% ( 3) 00:10:39.084 16739.316 - 16844.594: 95.6455% ( 7) 00:10:39.084 16844.594 - 16949.873: 95.7736% ( 15) 00:10:39.084 16949.873 - 17055.152: 95.8419% ( 8) 00:10:39.084 17055.152 - 17160.431: 95.9273% ( 10) 00:10:39.084 17160.431 - 17265.709: 96.0553% ( 15) 00:10:39.084 17265.709 - 17370.988: 96.1749% ( 14) 00:10:39.084 17370.988 - 17476.267: 96.5335% ( 42) 00:10:39.084 17476.267 - 17581.545: 96.8152% ( 33) 00:10:39.084 17581.545 - 17686.824: 97.0714% ( 30) 00:10:39.084 17686.824 - 17792.103: 97.3275% ( 30) 00:10:39.084 17792.103 - 17897.382: 97.4812% ( 18) 00:10:39.084 17897.382 - 18002.660: 97.6605% ( 21) 00:10:39.084 18002.660 - 18107.939: 97.8142% ( 18) 00:10:39.084 18107.939 - 18213.218: 97.9764% ( 19) 00:10:39.084 18213.218 - 18318.496: 98.0874% ( 13) 00:10:39.084 18318.496 - 18423.775: 98.1472% ( 7) 00:10:39.084 18423.775 - 18529.054: 98.1728% ( 3) 00:10:39.084 18529.054 - 18634.333: 98.2070% ( 4) 00:10:39.084 18634.333 - 18739.611: 98.2923% ( 10) 00:10:39.084 18739.611 - 18844.890: 98.3607% ( 
8) 00:10:39.084 18844.890 - 18950.169: 98.4546% ( 11) 00:10:39.084 18950.169 - 19055.447: 98.5058% ( 6) 00:10:39.084 19055.447 - 19160.726: 98.6168% ( 13) 00:10:39.084 19160.726 - 19266.005: 98.7107% ( 11) 00:10:39.084 19266.005 - 19371.284: 98.7876% ( 9) 00:10:39.084 19371.284 - 19476.562: 98.8217% ( 4) 00:10:39.084 19897.677 - 20002.956: 98.8388% ( 2) 00:10:39.084 20002.956 - 20108.235: 98.8815% ( 5) 00:10:39.084 20108.235 - 20213.513: 98.8986% ( 2) 00:10:39.084 20213.513 - 20318.792: 98.9071% ( 1) 00:10:39.084 33899.746 - 34110.304: 98.9413% ( 4) 00:10:39.084 34110.304 - 34320.861: 99.0010% ( 7) 00:10:39.084 34320.861 - 34531.418: 99.0693% ( 8) 00:10:39.084 34531.418 - 34741.976: 99.1291% ( 7) 00:10:39.084 34741.976 - 34952.533: 99.1803% ( 6) 00:10:39.084 34952.533 - 35163.091: 99.2401% ( 7) 00:10:39.084 35163.091 - 35373.648: 99.2913% ( 6) 00:10:39.084 35373.648 - 35584.206: 99.3596% ( 8) 00:10:39.084 35584.206 - 35794.763: 99.4109% ( 6) 00:10:39.084 35794.763 - 36005.320: 99.4536% ( 5) 00:10:39.084 42322.043 - 42532.601: 99.4706% ( 2) 00:10:39.084 42532.601 - 42743.158: 99.5219% ( 6) 00:10:39.084 42743.158 - 42953.716: 99.5816% ( 7) 00:10:39.084 42953.716 - 43164.273: 99.6329% ( 6) 00:10:39.084 43164.273 - 43374.831: 99.6926% ( 7) 00:10:39.084 43374.831 - 43585.388: 99.7524% ( 7) 00:10:39.084 43585.388 - 43795.945: 99.8122% ( 7) 00:10:39.084 43795.945 - 44006.503: 99.8548% ( 5) 00:10:39.084 44006.503 - 44217.060: 99.9146% ( 7) 00:10:39.084 44217.060 - 44427.618: 99.9744% ( 7) 00:10:39.084 44427.618 - 44638.175: 100.0000% ( 3) 00:10:39.084 00:10:39.084 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:39.084 ============================================================================== 00:10:39.084 Range in us Cumulative IO count 00:10:39.084 8474.937 - 8527.576: 0.0085% ( 1) 00:10:39.084 8527.576 - 8580.215: 0.0683% ( 7) 00:10:39.084 8580.215 - 8632.855: 0.1622% ( 11) 00:10:39.084 8632.855 - 8685.494: 0.2818% ( 14) 00:10:39.084 8685.494 - 8738.133: 0.4440% ( 19) 00:10:39.084 8738.133 - 8790.773: 0.6916% ( 29) 00:10:39.084 8790.773 - 8843.412: 1.0502% ( 42) 00:10:39.084 8843.412 - 8896.051: 1.4515% ( 47) 00:10:39.084 8896.051 - 8948.691: 2.2883% ( 98) 00:10:39.084 8948.691 - 9001.330: 3.4921% ( 141) 00:10:39.084 9001.330 - 9053.969: 4.9863% ( 175) 00:10:39.084 9053.969 - 9106.609: 7.6332% ( 310) 00:10:39.084 9106.609 - 9159.248: 9.8702% ( 262) 00:10:39.084 9159.248 - 9211.888: 12.4744% ( 305) 00:10:39.084 9211.888 - 9264.527: 14.6687% ( 257) 00:10:39.084 9264.527 - 9317.166: 17.4863% ( 330) 00:10:39.084 9317.166 - 9369.806: 20.5772% ( 362) 00:10:39.084 9369.806 - 9422.445: 24.2145% ( 426) 00:10:39.084 9422.445 - 9475.084: 28.2702% ( 475) 00:10:39.084 9475.084 - 9527.724: 32.3087% ( 473) 00:10:39.084 9527.724 - 9580.363: 36.4498% ( 485) 00:10:39.084 9580.363 - 9633.002: 40.7104% ( 499) 00:10:39.084 9633.002 - 9685.642: 44.2538% ( 415) 00:10:39.084 9685.642 - 9738.281: 47.4898% ( 379) 00:10:39.084 9738.281 - 9790.920: 50.5806% ( 362) 00:10:39.084 9790.920 - 9843.560: 53.1335% ( 299) 00:10:39.084 9843.560 - 9896.199: 55.2681% ( 250) 00:10:39.084 9896.199 - 9948.839: 57.4539% ( 256) 00:10:39.084 9948.839 - 10001.478: 59.1359% ( 197) 00:10:39.084 10001.478 - 10054.117: 60.5533% ( 166) 00:10:39.084 10054.117 - 10106.757: 61.4839% ( 109) 00:10:39.084 10106.757 - 10159.396: 62.2268% ( 87) 00:10:39.084 10159.396 - 10212.035: 63.0977% ( 102) 00:10:39.084 10212.035 - 10264.675: 63.8747% ( 91) 00:10:39.084 10264.675 - 10317.314: 64.6431% ( 90) 00:10:39.084 10317.314 - 10369.953: 
65.7104% ( 125) 00:10:39.084 10369.953 - 10422.593: 66.6496% ( 110) 00:10:39.084 10422.593 - 10475.232: 67.5717% ( 108) 00:10:39.084 10475.232 - 10527.871: 68.5792% ( 118) 00:10:39.084 10527.871 - 10580.511: 69.7661% ( 139) 00:10:39.084 10580.511 - 10633.150: 70.7992% ( 121) 00:10:39.084 10633.150 - 10685.790: 71.8238% ( 120) 00:10:39.084 10685.790 - 10738.429: 72.8740% ( 123) 00:10:39.084 10738.429 - 10791.068: 73.8473% ( 114) 00:10:39.084 10791.068 - 10843.708: 74.5048% ( 77) 00:10:39.084 10843.708 - 10896.347: 75.1452% ( 75) 00:10:39.084 10896.347 - 10948.986: 75.6318% ( 57) 00:10:39.084 10948.986 - 11001.626: 76.0844% ( 53) 00:10:39.084 11001.626 - 11054.265: 76.4259% ( 40) 00:10:39.084 11054.265 - 11106.904: 76.8443% ( 49) 00:10:39.084 11106.904 - 11159.544: 77.2883% ( 52) 00:10:39.084 11159.544 - 11212.183: 77.6554% ( 43) 00:10:39.084 11212.183 - 11264.822: 77.9884% ( 39) 00:10:39.084 11264.822 - 11317.462: 78.3555% ( 43) 00:10:39.084 11317.462 - 11370.101: 78.6885% ( 39) 00:10:39.084 11370.101 - 11422.741: 79.0301% ( 40) 00:10:39.084 11422.741 - 11475.380: 79.4228% ( 46) 00:10:39.084 11475.380 - 11528.019: 79.7900% ( 43) 00:10:39.084 11528.019 - 11580.659: 80.1656% ( 44) 00:10:39.084 11580.659 - 11633.298: 80.4986% ( 39) 00:10:39.084 11633.298 - 11685.937: 80.8402% ( 40) 00:10:39.084 11685.937 - 11738.577: 81.0878% ( 29) 00:10:39.084 11738.577 - 11791.216: 81.2842% ( 23) 00:10:39.084 11791.216 - 11843.855: 81.4891% ( 24) 00:10:39.084 11843.855 - 11896.495: 81.6598% ( 20) 00:10:39.084 11896.495 - 11949.134: 81.8135% ( 18) 00:10:39.085 11949.134 - 12001.773: 81.9843% ( 20) 00:10:39.085 12001.773 - 12054.413: 82.1551% ( 20) 00:10:39.085 12054.413 - 12107.052: 82.4710% ( 37) 00:10:39.085 12107.052 - 12159.692: 82.6930% ( 26) 00:10:39.085 12159.692 - 12212.331: 82.8467% ( 18) 00:10:39.085 12212.331 - 12264.970: 82.9235% ( 9) 00:10:39.085 12264.970 - 12317.610: 83.0174% ( 11) 00:10:39.085 12317.610 - 12370.249: 83.1113% ( 11) 00:10:39.085 12370.249 - 12422.888: 83.2053% ( 11) 00:10:39.085 12422.888 - 12475.528: 83.4358% ( 27) 00:10:39.085 12475.528 - 12528.167: 83.6492% ( 25) 00:10:39.085 12528.167 - 12580.806: 83.8542% ( 24) 00:10:39.085 12580.806 - 12633.446: 84.0762% ( 26) 00:10:39.085 12633.446 - 12686.085: 84.2128% ( 16) 00:10:39.085 12686.085 - 12738.724: 84.3323% ( 14) 00:10:39.085 12738.724 - 12791.364: 84.4860% ( 18) 00:10:39.085 12791.364 - 12844.003: 84.6482% ( 19) 00:10:39.085 12844.003 - 12896.643: 84.8446% ( 23) 00:10:39.085 12896.643 - 12949.282: 85.1605% ( 37) 00:10:39.085 12949.282 - 13001.921: 85.5533% ( 46) 00:10:39.085 13001.921 - 13054.561: 85.9887% ( 51) 00:10:39.085 13054.561 - 13107.200: 86.3388% ( 41) 00:10:39.085 13107.200 - 13159.839: 86.5523% ( 25) 00:10:39.085 13159.839 - 13212.479: 86.7572% ( 24) 00:10:39.085 13212.479 - 13265.118: 86.9962% ( 28) 00:10:39.085 13265.118 - 13317.757: 87.1926% ( 23) 00:10:39.085 13317.757 - 13370.397: 87.3378% ( 17) 00:10:39.085 13370.397 - 13423.036: 87.5171% ( 21) 00:10:39.085 13423.036 - 13475.676: 87.6366% ( 14) 00:10:39.085 13475.676 - 13580.954: 87.9098% ( 32) 00:10:39.085 13580.954 - 13686.233: 88.0977% ( 22) 00:10:39.085 13686.233 - 13791.512: 88.2428% ( 17) 00:10:39.085 13791.512 - 13896.790: 88.4734% ( 27) 00:10:39.085 13896.790 - 14002.069: 88.7551% ( 33) 00:10:39.085 14002.069 - 14107.348: 89.0881% ( 39) 00:10:39.085 14107.348 - 14212.627: 89.4126% ( 38) 00:10:39.085 14212.627 - 14317.905: 89.6858% ( 32) 00:10:39.085 14317.905 - 14423.184: 89.8480% ( 19) 00:10:39.085 14423.184 - 14528.463: 90.0188% ( 20) 00:10:39.085 
14528.463 - 14633.741: 90.2493% ( 27) 00:10:39.085 14633.741 - 14739.020: 90.4457% ( 23) 00:10:39.085 14739.020 - 14844.299: 90.6933% ( 29) 00:10:39.085 14844.299 - 14949.578: 91.0348% ( 40) 00:10:39.085 14949.578 - 15054.856: 91.3593% ( 38) 00:10:39.085 15054.856 - 15160.135: 91.6496% ( 34) 00:10:39.085 15160.135 - 15265.414: 92.0338% ( 45) 00:10:39.085 15265.414 - 15370.692: 92.5205% ( 57) 00:10:39.085 15370.692 - 15475.971: 92.8962% ( 44) 00:10:39.085 15475.971 - 15581.250: 93.2719% ( 44) 00:10:39.085 15581.250 - 15686.529: 93.5536% ( 33) 00:10:39.085 15686.529 - 15791.807: 93.8098% ( 30) 00:10:39.085 15791.807 - 15897.086: 94.1001% ( 34) 00:10:39.085 15897.086 - 16002.365: 94.4416% ( 40) 00:10:39.085 16002.365 - 16107.643: 94.8685% ( 50) 00:10:39.085 16107.643 - 16212.922: 95.1759% ( 36) 00:10:39.085 16212.922 - 16318.201: 95.3979% ( 26) 00:10:39.085 16318.201 - 16423.480: 95.5089% ( 13) 00:10:39.085 16423.480 - 16528.758: 95.5857% ( 9) 00:10:39.085 16528.758 - 16634.037: 95.6199% ( 4) 00:10:39.085 16634.037 - 16739.316: 95.6284% ( 1) 00:10:39.085 16844.594 - 16949.873: 95.6626% ( 4) 00:10:39.085 16949.873 - 17055.152: 95.7821% ( 14) 00:10:39.085 17055.152 - 17160.431: 95.8333% ( 6) 00:10:39.085 17160.431 - 17265.709: 95.8504% ( 2) 00:10:39.085 17265.709 - 17370.988: 95.8760% ( 3) 00:10:39.085 17370.988 - 17476.267: 96.0724% ( 23) 00:10:39.085 17476.267 - 17581.545: 96.3029% ( 27) 00:10:39.085 17581.545 - 17686.824: 96.4310% ( 15) 00:10:39.085 17686.824 - 17792.103: 96.5932% ( 19) 00:10:39.085 17792.103 - 17897.382: 96.7555% ( 19) 00:10:39.085 17897.382 - 18002.660: 97.0799% ( 38) 00:10:39.085 18002.660 - 18107.939: 97.3105% ( 27) 00:10:39.085 18107.939 - 18213.218: 97.5239% ( 25) 00:10:39.085 18213.218 - 18318.496: 97.7203% ( 23) 00:10:39.085 18318.496 - 18423.775: 97.8825% ( 19) 00:10:39.085 18423.775 - 18529.054: 98.0106% ( 15) 00:10:39.085 18529.054 - 18634.333: 98.1045% ( 11) 00:10:39.085 18634.333 - 18739.611: 98.1984% ( 11) 00:10:39.085 18739.611 - 18844.890: 98.2497% ( 6) 00:10:39.085 18844.890 - 18950.169: 98.2838% ( 4) 00:10:39.085 18950.169 - 19055.447: 98.3607% ( 9) 00:10:39.085 19055.447 - 19160.726: 98.4460% ( 10) 00:10:39.085 19160.726 - 19266.005: 98.5229% ( 9) 00:10:39.085 19266.005 - 19371.284: 98.5485% ( 3) 00:10:39.085 19371.284 - 19476.562: 98.5741% ( 3) 00:10:39.085 19476.562 - 19581.841: 98.5912% ( 2) 00:10:39.085 19581.841 - 19687.120: 98.6253% ( 4) 00:10:39.085 19687.120 - 19792.398: 98.6595% ( 4) 00:10:39.085 19792.398 - 19897.677: 98.7022% ( 5) 00:10:39.085 19897.677 - 20002.956: 98.7363% ( 4) 00:10:39.085 20002.956 - 20108.235: 98.7705% ( 4) 00:10:39.085 20108.235 - 20213.513: 98.8046% ( 4) 00:10:39.085 20213.513 - 20318.792: 98.8388% ( 4) 00:10:39.085 20318.792 - 20424.071: 98.8815% ( 5) 00:10:39.085 20424.071 - 20529.349: 98.9071% ( 3) 00:10:39.085 32215.287 - 32425.844: 98.9327% ( 3) 00:10:39.085 32425.844 - 32636.402: 99.0010% ( 8) 00:10:39.085 32636.402 - 32846.959: 99.0523% ( 6) 00:10:39.085 32846.959 - 33057.516: 99.1206% ( 8) 00:10:39.085 33057.516 - 33268.074: 99.1803% ( 7) 00:10:39.085 33268.074 - 33478.631: 99.2401% ( 7) 00:10:39.085 33478.631 - 33689.189: 99.2999% ( 7) 00:10:39.085 33689.189 - 33899.746: 99.3596% ( 7) 00:10:39.085 33899.746 - 34110.304: 99.4194% ( 7) 00:10:39.085 34110.304 - 34320.861: 99.4536% ( 4) 00:10:39.085 40637.584 - 40848.141: 99.5133% ( 7) 00:10:39.085 40848.141 - 41058.699: 99.5731% ( 7) 00:10:39.085 41058.699 - 41269.256: 99.6329% ( 7) 00:10:39.085 41269.256 - 41479.814: 99.6926% ( 7) 00:10:39.085 41479.814 - 
41690.371: 99.7609% ( 8) 00:10:39.085 41690.371 - 41900.929: 99.8207% ( 7) 00:10:39.085 41900.929 - 42111.486: 99.8890% ( 8) 00:10:39.085 42111.486 - 42322.043: 99.9488% ( 7) 00:10:39.085 42322.043 - 42532.601: 100.0000% ( 6) 00:10:39.085 00:10:39.085 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:39.085 ============================================================================== 00:10:39.085 Range in us Cumulative IO count 00:10:39.085 8369.658 - 8422.297: 0.0085% ( 1) 00:10:39.085 8474.937 - 8527.576: 0.0171% ( 1) 00:10:39.085 8527.576 - 8580.215: 0.0854% ( 8) 00:10:39.085 8580.215 - 8632.855: 0.1878% ( 12) 00:10:39.085 8632.855 - 8685.494: 0.3074% ( 14) 00:10:39.085 8685.494 - 8738.133: 0.5891% ( 33) 00:10:39.085 8738.133 - 8790.773: 1.1270% ( 63) 00:10:39.085 8790.773 - 8843.412: 1.5540% ( 50) 00:10:39.085 8843.412 - 8896.051: 2.3822% ( 97) 00:10:39.085 8896.051 - 8948.691: 3.5178% ( 133) 00:10:39.085 8948.691 - 9001.330: 5.1571% ( 192) 00:10:39.085 9001.330 - 9053.969: 6.7964% ( 192) 00:10:39.085 9053.969 - 9106.609: 8.8371% ( 239) 00:10:39.085 9106.609 - 9159.248: 10.7155% ( 220) 00:10:39.085 9159.248 - 9211.888: 12.7647% ( 240) 00:10:39.085 9211.888 - 9264.527: 14.8566% ( 245) 00:10:39.085 9264.527 - 9317.166: 17.1192% ( 265) 00:10:39.085 9317.166 - 9369.806: 19.7575% ( 309) 00:10:39.085 9369.806 - 9422.445: 22.5666% ( 329) 00:10:39.085 9422.445 - 9475.084: 26.1697% ( 422) 00:10:39.085 9475.084 - 9527.724: 29.7131% ( 415) 00:10:39.085 9527.724 - 9580.363: 33.3675% ( 428) 00:10:39.085 9580.363 - 9633.002: 37.5512% ( 490) 00:10:39.085 9633.002 - 9685.642: 41.2312% ( 431) 00:10:39.085 9685.642 - 9738.281: 44.6977% ( 406) 00:10:39.085 9738.281 - 9790.920: 48.1045% ( 399) 00:10:39.085 9790.920 - 9843.560: 50.9648% ( 335) 00:10:39.085 9843.560 - 9896.199: 53.9874% ( 354) 00:10:39.085 9896.199 - 9948.839: 56.3268% ( 274) 00:10:39.085 9948.839 - 10001.478: 57.8723% ( 181) 00:10:39.085 10001.478 - 10054.117: 59.3408% ( 172) 00:10:39.085 10054.117 - 10106.757: 60.6045% ( 148) 00:10:39.085 10106.757 - 10159.396: 61.7486% ( 134) 00:10:39.085 10159.396 - 10212.035: 62.8928% ( 134) 00:10:39.085 10212.035 - 10264.675: 63.8490% ( 112) 00:10:39.085 10264.675 - 10317.314: 64.7626% ( 107) 00:10:39.085 10317.314 - 10369.953: 65.4884% ( 85) 00:10:39.085 10369.953 - 10422.593: 66.3337% ( 99) 00:10:39.085 10422.593 - 10475.232: 67.2473% ( 107) 00:10:39.085 10475.232 - 10527.871: 68.2889% ( 122) 00:10:39.085 10527.871 - 10580.511: 69.5953% ( 153) 00:10:39.085 10580.511 - 10633.150: 70.6796% ( 127) 00:10:39.085 10633.150 - 10685.790: 71.9092% ( 144) 00:10:39.085 10685.790 - 10738.429: 72.9679% ( 124) 00:10:39.085 10738.429 - 10791.068: 73.9413% ( 114) 00:10:39.085 10791.068 - 10843.708: 74.5219% ( 68) 00:10:39.085 10843.708 - 10896.347: 74.9573% ( 51) 00:10:39.085 10896.347 - 10948.986: 75.2818% ( 38) 00:10:39.085 10948.986 - 11001.626: 75.5464% ( 31) 00:10:39.085 11001.626 - 11054.265: 75.8367% ( 34) 00:10:39.085 11054.265 - 11106.904: 76.2210% ( 45) 00:10:39.085 11106.904 - 11159.544: 76.6393% ( 49) 00:10:39.085 11159.544 - 11212.183: 76.9467% ( 36) 00:10:39.085 11212.183 - 11264.822: 77.1858% ( 28) 00:10:39.085 11264.822 - 11317.462: 77.4505% ( 31) 00:10:39.085 11317.462 - 11370.101: 77.9201% ( 55) 00:10:39.085 11370.101 - 11422.741: 78.2958% ( 44) 00:10:39.085 11422.741 - 11475.380: 78.7227% ( 50) 00:10:39.085 11475.380 - 11528.019: 79.3033% ( 68) 00:10:39.085 11528.019 - 11580.659: 79.9180% ( 72) 00:10:39.085 11580.659 - 11633.298: 80.3962% ( 56) 00:10:39.085 11633.298 - 
11685.937: 80.8658% ( 55) 00:10:39.085 11685.937 - 11738.577: 81.2842% ( 49) 00:10:39.085 11738.577 - 11791.216: 81.5745% ( 34) 00:10:39.085 11791.216 - 11843.855: 81.9074% ( 39) 00:10:39.085 11843.855 - 11896.495: 82.2234% ( 37) 00:10:39.085 11896.495 - 11949.134: 82.4966% ( 32) 00:10:39.085 11949.134 - 12001.773: 82.7357% ( 28) 00:10:39.085 12001.773 - 12054.413: 82.8637% ( 15) 00:10:39.085 12054.413 - 12107.052: 82.9918% ( 15) 00:10:39.085 12107.052 - 12159.692: 83.1455% ( 18) 00:10:39.085 12159.692 - 12212.331: 83.2992% ( 18) 00:10:39.085 12212.331 - 12264.970: 83.3760% ( 9) 00:10:39.085 12264.970 - 12317.610: 83.4699% ( 11) 00:10:39.085 12317.610 - 12370.249: 83.5553% ( 10) 00:10:39.085 12370.249 - 12422.888: 83.6663% ( 13) 00:10:39.085 12422.888 - 12475.528: 83.8542% ( 22) 00:10:39.085 12475.528 - 12528.167: 83.9310% ( 9) 00:10:39.085 12528.167 - 12580.806: 83.9737% ( 5) 00:10:39.085 12580.806 - 12633.446: 84.0164% ( 5) 00:10:39.086 12633.446 - 12686.085: 84.0762% ( 7) 00:10:39.086 12686.085 - 12738.724: 84.2298% ( 18) 00:10:39.086 12738.724 - 12791.364: 84.3579% ( 15) 00:10:39.086 12791.364 - 12844.003: 84.4604% ( 12) 00:10:39.086 12844.003 - 12896.643: 84.6397% ( 21) 00:10:39.086 12896.643 - 12949.282: 84.7251% ( 10) 00:10:39.086 12949.282 - 13001.921: 84.9300% ( 24) 00:10:39.086 13001.921 - 13054.561: 85.1605% ( 27) 00:10:39.086 13054.561 - 13107.200: 85.5277% ( 43) 00:10:39.086 13107.200 - 13159.839: 85.8265% ( 35) 00:10:39.086 13159.839 - 13212.479: 86.0912% ( 31) 00:10:39.086 13212.479 - 13265.118: 86.4583% ( 43) 00:10:39.086 13265.118 - 13317.757: 86.7230% ( 31) 00:10:39.086 13317.757 - 13370.397: 86.9194% ( 23) 00:10:39.086 13370.397 - 13423.036: 87.2012% ( 33) 00:10:39.086 13423.036 - 13475.676: 87.5085% ( 36) 00:10:39.086 13475.676 - 13580.954: 88.0891% ( 68) 00:10:39.086 13580.954 - 13686.233: 88.5417% ( 53) 00:10:39.086 13686.233 - 13791.512: 89.0625% ( 61) 00:10:39.086 13791.512 - 13896.790: 89.5663% ( 59) 00:10:39.086 13896.790 - 14002.069: 89.9249% ( 42) 00:10:39.086 14002.069 - 14107.348: 90.2322% ( 36) 00:10:39.086 14107.348 - 14212.627: 90.4628% ( 27) 00:10:39.086 14212.627 - 14317.905: 90.6677% ( 24) 00:10:39.086 14317.905 - 14423.184: 90.8641% ( 23) 00:10:39.086 14423.184 - 14528.463: 91.1117% ( 29) 00:10:39.086 14528.463 - 14633.741: 91.2910% ( 21) 00:10:39.086 14633.741 - 14739.020: 91.3849% ( 11) 00:10:39.086 14739.020 - 14844.299: 91.4703% ( 10) 00:10:39.086 14844.299 - 14949.578: 91.6154% ( 17) 00:10:39.086 14949.578 - 15054.856: 91.7691% ( 18) 00:10:39.086 15054.856 - 15160.135: 91.9057% ( 16) 00:10:39.086 15160.135 - 15265.414: 92.1533% ( 29) 00:10:39.086 15265.414 - 15370.692: 92.5290% ( 44) 00:10:39.086 15370.692 - 15475.971: 92.9474% ( 49) 00:10:39.086 15475.971 - 15581.250: 93.1096% ( 19) 00:10:39.086 15581.250 - 15686.529: 93.1950% ( 10) 00:10:39.086 15686.529 - 15791.807: 93.2719% ( 9) 00:10:39.086 15791.807 - 15897.086: 93.3999% ( 15) 00:10:39.086 15897.086 - 16002.365: 93.5878% ( 22) 00:10:39.086 16002.365 - 16107.643: 93.7415% ( 18) 00:10:39.086 16107.643 - 16212.922: 93.8866% ( 17) 00:10:39.086 16212.922 - 16318.201: 94.0574% ( 20) 00:10:39.086 16318.201 - 16423.480: 94.3989% ( 40) 00:10:39.086 16423.480 - 16528.758: 94.9795% ( 68) 00:10:39.086 16528.758 - 16634.037: 95.3381% ( 42) 00:10:39.086 16634.037 - 16739.316: 95.8931% ( 65) 00:10:39.086 16739.316 - 16844.594: 96.1066% ( 25) 00:10:39.086 16844.594 - 16949.873: 96.3200% ( 25) 00:10:39.086 16949.873 - 17055.152: 96.5676% ( 29) 00:10:39.086 17055.152 - 17160.431: 96.7811% ( 25) 00:10:39.086 
17160.431 - 17265.709: 97.0202% ( 28) 00:10:39.086 17265.709 - 17370.988: 97.2421% ( 26) 00:10:39.086 17370.988 - 17476.267: 97.4471% ( 24) 00:10:39.086 17476.267 - 17581.545: 97.6178% ( 20) 00:10:39.086 17581.545 - 17686.824: 97.7544% ( 16) 00:10:39.086 17686.824 - 17792.103: 97.8484% ( 11) 00:10:39.086 17792.103 - 17897.382: 97.9423% ( 11) 00:10:39.086 17897.382 - 18002.660: 98.0447% ( 12) 00:10:39.086 18002.660 - 18107.939: 98.1472% ( 12) 00:10:39.086 18107.939 - 18213.218: 98.2326% ( 10) 00:10:39.086 18213.218 - 18318.496: 98.3094% ( 9) 00:10:39.086 18318.496 - 18423.775: 98.3607% ( 6) 00:10:39.086 18950.169 - 19055.447: 98.4204% ( 7) 00:10:39.086 19055.447 - 19160.726: 98.4973% ( 9) 00:10:39.086 19160.726 - 19266.005: 98.5314% ( 4) 00:10:39.086 19266.005 - 19371.284: 98.5570% ( 3) 00:10:39.086 19371.284 - 19476.562: 98.5827% ( 3) 00:10:39.086 19476.562 - 19581.841: 98.6168% ( 4) 00:10:39.086 19581.841 - 19687.120: 98.6510% ( 4) 00:10:39.086 19687.120 - 19792.398: 98.6851% ( 4) 00:10:39.086 19792.398 - 19897.677: 98.7193% ( 4) 00:10:39.086 19897.677 - 20002.956: 98.7534% ( 4) 00:10:39.086 20002.956 - 20108.235: 98.7876% ( 4) 00:10:39.086 20108.235 - 20213.513: 98.8303% ( 5) 00:10:39.086 20213.513 - 20318.792: 98.8644% ( 4) 00:10:39.086 20318.792 - 20424.071: 98.8986% ( 4) 00:10:39.086 20424.071 - 20529.349: 98.9071% ( 1) 00:10:39.086 31373.057 - 31583.614: 98.9242% ( 2) 00:10:39.086 31583.614 - 31794.172: 98.9839% ( 7) 00:10:39.086 31794.172 - 32004.729: 99.0437% ( 7) 00:10:39.086 32004.729 - 32215.287: 99.1120% ( 8) 00:10:39.086 32215.287 - 32425.844: 99.1803% ( 8) 00:10:39.086 32425.844 - 32636.402: 99.2401% ( 7) 00:10:39.086 32636.402 - 32846.959: 99.3084% ( 8) 00:10:39.086 32846.959 - 33057.516: 99.3682% ( 7) 00:10:39.086 33057.516 - 33268.074: 99.4279% ( 7) 00:10:39.086 33268.074 - 33478.631: 99.4536% ( 3) 00:10:39.086 39374.239 - 39584.797: 99.4706% ( 2) 00:10:39.086 39584.797 - 39795.354: 99.5389% ( 8) 00:10:39.086 39795.354 - 40005.912: 99.5902% ( 6) 00:10:39.086 40005.912 - 40216.469: 99.6585% ( 8) 00:10:39.086 40216.469 - 40427.027: 99.7182% ( 7) 00:10:39.086 40427.027 - 40637.584: 99.7695% ( 6) 00:10:39.086 40637.584 - 40848.141: 99.8378% ( 8) 00:10:39.086 40848.141 - 41058.699: 99.8975% ( 7) 00:10:39.086 41058.699 - 41269.256: 99.9573% ( 7) 00:10:39.086 41269.256 - 41479.814: 100.0000% ( 5) 00:10:39.086 00:10:39.086 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:39.086 ============================================================================== 00:10:39.086 Range in us Cumulative IO count 00:10:39.086 8211.740 - 8264.379: 0.0085% ( 1) 00:10:39.086 8422.297 - 8474.937: 0.0512% ( 5) 00:10:39.086 8474.937 - 8527.576: 0.1366% ( 10) 00:10:39.086 8527.576 - 8580.215: 0.2561% ( 14) 00:10:39.086 8580.215 - 8632.855: 0.3757% ( 14) 00:10:39.086 8632.855 - 8685.494: 0.6062% ( 27) 00:10:39.086 8685.494 - 8738.133: 0.9392% ( 39) 00:10:39.086 8738.133 - 8790.773: 1.5369% ( 70) 00:10:39.086 8790.773 - 8843.412: 2.1687% ( 74) 00:10:39.086 8843.412 - 8896.051: 3.0567% ( 104) 00:10:39.086 8896.051 - 8948.691: 4.0386% ( 115) 00:10:39.086 8948.691 - 9001.330: 5.0717% ( 121) 00:10:39.086 9001.330 - 9053.969: 6.4635% ( 163) 00:10:39.086 9053.969 - 9106.609: 7.9662% ( 176) 00:10:39.086 9106.609 - 9159.248: 9.6995% ( 203) 00:10:39.086 9159.248 - 9211.888: 11.6889% ( 233) 00:10:39.086 9211.888 - 9264.527: 14.3784% ( 315) 00:10:39.086 9264.527 - 9317.166: 16.8460% ( 289) 00:10:39.086 9317.166 - 9369.806: 19.6380% ( 327) 00:10:39.086 9369.806 - 9422.445: 22.6605% ( 354) 
00:10:39.086 9422.445 - 9475.084: 26.2124% ( 416) 00:10:39.086 9475.084 - 9527.724: 29.7729% ( 417) 00:10:39.086 9527.724 - 9580.363: 33.4529% ( 431) 00:10:39.086 9580.363 - 9633.002: 37.6708% ( 494) 00:10:39.086 9633.002 - 9685.642: 41.1117% ( 403) 00:10:39.086 9685.642 - 9738.281: 44.5441% ( 402) 00:10:39.086 9738.281 - 9790.920: 47.8142% ( 383) 00:10:39.086 9790.920 - 9843.560: 50.7172% ( 340) 00:10:39.086 9843.560 - 9896.199: 53.5775% ( 335) 00:10:39.086 9896.199 - 9948.839: 56.0707% ( 292) 00:10:39.086 9948.839 - 10001.478: 57.7698% ( 199) 00:10:39.086 10001.478 - 10054.117: 59.2128% ( 169) 00:10:39.086 10054.117 - 10106.757: 60.2544% ( 122) 00:10:39.086 10106.757 - 10159.396: 61.0656% ( 95) 00:10:39.086 10159.396 - 10212.035: 61.9109% ( 99) 00:10:39.086 10212.035 - 10264.675: 62.7135% ( 94) 00:10:39.086 10264.675 - 10317.314: 63.7637% ( 123) 00:10:39.086 10317.314 - 10369.953: 64.6687% ( 106) 00:10:39.086 10369.953 - 10422.593: 65.6335% ( 113) 00:10:39.086 10422.593 - 10475.232: 66.8204% ( 139) 00:10:39.086 10475.232 - 10527.871: 67.8449% ( 120) 00:10:39.086 10527.871 - 10580.511: 68.7927% ( 111) 00:10:39.086 10580.511 - 10633.150: 70.0307% ( 145) 00:10:39.086 10633.150 - 10685.790: 70.9785% ( 111) 00:10:39.086 10685.790 - 10738.429: 71.8579% ( 103) 00:10:39.086 10738.429 - 10791.068: 72.7544% ( 105) 00:10:39.086 10791.068 - 10843.708: 73.5997% ( 99) 00:10:39.086 10843.708 - 10896.347: 74.1974% ( 70) 00:10:39.086 10896.347 - 10948.986: 74.8890% ( 81) 00:10:39.086 10948.986 - 11001.626: 75.4098% ( 61) 00:10:39.086 11001.626 - 11054.265: 75.7855% ( 44) 00:10:39.086 11054.265 - 11106.904: 76.1356% ( 41) 00:10:39.086 11106.904 - 11159.544: 76.5967% ( 54) 00:10:39.086 11159.544 - 11212.183: 77.1175% ( 61) 00:10:39.086 11212.183 - 11264.822: 77.4761% ( 42) 00:10:39.086 11264.822 - 11317.462: 77.8432% ( 43) 00:10:39.086 11317.462 - 11370.101: 78.3555% ( 60) 00:10:39.086 11370.101 - 11422.741: 78.7910% ( 51) 00:10:39.086 11422.741 - 11475.380: 79.2179% ( 50) 00:10:39.086 11475.380 - 11528.019: 79.7046% ( 57) 00:10:39.086 11528.019 - 11580.659: 80.2169% ( 60) 00:10:39.086 11580.659 - 11633.298: 80.6011% ( 45) 00:10:39.086 11633.298 - 11685.937: 80.8231% ( 26) 00:10:39.086 11685.937 - 11738.577: 81.0109% ( 22) 00:10:39.086 11738.577 - 11791.216: 81.2244% ( 25) 00:10:39.086 11791.216 - 11843.855: 81.3952% ( 20) 00:10:39.086 11843.855 - 11896.495: 81.6940% ( 35) 00:10:39.086 11896.495 - 11949.134: 81.8562% ( 19) 00:10:39.086 11949.134 - 12001.773: 82.0099% ( 18) 00:10:39.086 12001.773 - 12054.413: 82.1977% ( 22) 00:10:39.086 12054.413 - 12107.052: 82.3429% ( 17) 00:10:39.086 12107.052 - 12159.692: 82.4027% ( 7) 00:10:39.086 12159.692 - 12212.331: 82.4966% ( 11) 00:10:39.086 12212.331 - 12264.970: 82.5905% ( 11) 00:10:39.086 12264.970 - 12317.610: 82.7442% ( 18) 00:10:39.086 12317.610 - 12370.249: 82.9662% ( 26) 00:10:39.086 12370.249 - 12422.888: 83.2053% ( 28) 00:10:39.086 12422.888 - 12475.528: 83.3675% ( 19) 00:10:39.086 12475.528 - 12528.167: 83.5553% ( 22) 00:10:39.086 12528.167 - 12580.806: 83.7346% ( 21) 00:10:39.086 12580.806 - 12633.446: 83.9225% ( 22) 00:10:39.086 12633.446 - 12686.085: 84.1274% ( 24) 00:10:39.086 12686.085 - 12738.724: 84.3067% ( 21) 00:10:39.086 12738.724 - 12791.364: 84.7080% ( 47) 00:10:39.086 12791.364 - 12844.003: 84.9898% ( 33) 00:10:39.086 12844.003 - 12896.643: 85.3313% ( 40) 00:10:39.086 12896.643 - 12949.282: 85.5704% ( 28) 00:10:39.086 12949.282 - 13001.921: 85.8094% ( 28) 00:10:39.086 13001.921 - 13054.561: 86.0058% ( 23) 00:10:39.086 13054.561 - 
13107.200: 86.1168% ( 13) 00:10:39.086 13107.200 - 13159.839: 86.1936% ( 9) 00:10:39.087 13159.839 - 13212.479: 86.2961% ( 12) 00:10:39.087 13212.479 - 13265.118: 86.3986% ( 12) 00:10:39.087 13265.118 - 13317.757: 86.5181% ( 14) 00:10:39.087 13317.757 - 13370.397: 86.6462% ( 15) 00:10:39.087 13370.397 - 13423.036: 86.7828% ( 16) 00:10:39.087 13423.036 - 13475.676: 86.9023% ( 14) 00:10:39.087 13475.676 - 13580.954: 87.1755% ( 32) 00:10:39.087 13580.954 - 13686.233: 87.5768% ( 47) 00:10:39.087 13686.233 - 13791.512: 88.0806% ( 59) 00:10:39.087 13791.512 - 13896.790: 88.5587% ( 56) 00:10:39.087 13896.790 - 14002.069: 89.0881% ( 62) 00:10:39.087 14002.069 - 14107.348: 89.6773% ( 69) 00:10:39.087 14107.348 - 14212.627: 90.4542% ( 91) 00:10:39.087 14212.627 - 14317.905: 90.9921% ( 63) 00:10:39.087 14317.905 - 14423.184: 91.3251% ( 39) 00:10:39.087 14423.184 - 14528.463: 91.5557% ( 27) 00:10:39.087 14528.463 - 14633.741: 91.7179% ( 19) 00:10:39.087 14633.741 - 14739.020: 91.8545% ( 16) 00:10:39.087 14739.020 - 14844.299: 92.2046% ( 41) 00:10:39.087 14844.299 - 14949.578: 92.4095% ( 24) 00:10:39.087 14949.578 - 15054.856: 92.5546% ( 17) 00:10:39.087 15054.856 - 15160.135: 92.6059% ( 6) 00:10:39.087 15160.135 - 15265.414: 92.6571% ( 6) 00:10:39.087 15265.414 - 15370.692: 92.7083% ( 6) 00:10:39.087 15370.692 - 15475.971: 92.7596% ( 6) 00:10:39.087 15475.971 - 15581.250: 92.8876% ( 15) 00:10:39.087 15581.250 - 15686.529: 93.0413% ( 18) 00:10:39.087 15686.529 - 15791.807: 93.1523% ( 13) 00:10:39.087 15791.807 - 15897.086: 93.3829% ( 27) 00:10:39.087 15897.086 - 16002.365: 93.6988% ( 37) 00:10:39.087 16002.365 - 16107.643: 94.0061% ( 36) 00:10:39.087 16107.643 - 16212.922: 94.2281% ( 26) 00:10:39.087 16212.922 - 16318.201: 94.5099% ( 33) 00:10:39.087 16318.201 - 16423.480: 94.7917% ( 33) 00:10:39.087 16423.480 - 16528.758: 95.1161% ( 38) 00:10:39.087 16528.758 - 16634.037: 95.4320% ( 37) 00:10:39.087 16634.037 - 16739.316: 95.6967% ( 31) 00:10:39.087 16739.316 - 16844.594: 95.8504% ( 18) 00:10:39.087 16844.594 - 16949.873: 96.1663% ( 37) 00:10:39.087 16949.873 - 17055.152: 96.4908% ( 38) 00:10:39.087 17055.152 - 17160.431: 96.8323% ( 40) 00:10:39.087 17160.431 - 17265.709: 97.2678% ( 51) 00:10:39.087 17265.709 - 17370.988: 97.5666% ( 35) 00:10:39.087 17370.988 - 17476.267: 97.9337% ( 43) 00:10:39.087 17476.267 - 17581.545: 98.1301% ( 23) 00:10:39.087 17581.545 - 17686.824: 98.2497% ( 14) 00:10:39.087 17686.824 - 17792.103: 98.3265% ( 9) 00:10:39.087 17792.103 - 17897.382: 98.3607% ( 4) 00:10:39.087 18634.333 - 18739.611: 98.4033% ( 5) 00:10:39.087 18739.611 - 18844.890: 98.4887% ( 10) 00:10:39.087 18844.890 - 18950.169: 98.5314% ( 5) 00:10:39.087 18950.169 - 19055.447: 98.5485% ( 2) 00:10:39.087 19055.447 - 19160.726: 98.5741% ( 3) 00:10:39.087 19160.726 - 19266.005: 98.6168% ( 5) 00:10:39.087 19266.005 - 19371.284: 98.6510% ( 4) 00:10:39.087 19371.284 - 19476.562: 98.6851% ( 4) 00:10:39.087 19476.562 - 19581.841: 98.7193% ( 4) 00:10:39.087 19581.841 - 19687.120: 98.7534% ( 4) 00:10:39.087 19687.120 - 19792.398: 98.7961% ( 5) 00:10:39.087 19792.398 - 19897.677: 98.8303% ( 4) 00:10:39.087 19897.677 - 20002.956: 98.8559% ( 3) 00:10:39.087 20002.956 - 20108.235: 98.8900% ( 4) 00:10:39.087 20108.235 - 20213.513: 98.9071% ( 2) 00:10:39.087 29688.598 - 29899.155: 98.9327% ( 3) 00:10:39.087 29899.155 - 30109.712: 99.0010% ( 8) 00:10:39.087 30109.712 - 30320.270: 99.0608% ( 7) 00:10:39.087 30320.270 - 30530.827: 99.1291% ( 8) 00:10:39.087 30530.827 - 30741.385: 99.1889% ( 7) 00:10:39.087 30741.385 - 30951.942: 
99.2572% ( 8) 00:10:39.087 30951.942 - 31162.500: 99.3084% ( 6) 00:10:39.087 31162.500 - 31373.057: 99.3767% ( 8) 00:10:39.087 31373.057 - 31583.614: 99.4365% ( 7) 00:10:39.087 31583.614 - 31794.172: 99.4536% ( 2) 00:10:39.087 37268.665 - 37479.222: 99.4962% ( 5) 00:10:39.087 37479.222 - 37689.780: 99.5560% ( 7) 00:10:39.087 37689.780 - 37900.337: 99.6158% ( 7) 00:10:39.087 37900.337 - 38110.895: 99.6755% ( 7) 00:10:39.087 38110.895 - 38321.452: 99.7268% ( 6) 00:10:39.087 38321.452 - 38532.010: 99.7865% ( 7) 00:10:39.087 38532.010 - 38742.567: 99.8463% ( 7) 00:10:39.087 38742.567 - 38953.124: 99.9146% ( 8) 00:10:39.087 38953.124 - 39163.682: 99.9744% ( 7) 00:10:39.087 39163.682 - 39374.239: 100.0000% ( 3) 00:10:39.087 00:10:39.087 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:39.087 ============================================================================== 00:10:39.087 Range in us Cumulative IO count 00:10:39.087 8264.379 - 8317.018: 0.0171% ( 2) 00:10:39.087 8369.658 - 8422.297: 0.0256% ( 1) 00:10:39.087 8422.297 - 8474.937: 0.0427% ( 2) 00:10:39.087 8474.937 - 8527.576: 0.1195% ( 9) 00:10:39.087 8527.576 - 8580.215: 0.2561% ( 16) 00:10:39.087 8580.215 - 8632.855: 0.4696% ( 25) 00:10:39.087 8632.855 - 8685.494: 0.8197% ( 41) 00:10:39.087 8685.494 - 8738.133: 1.1868% ( 43) 00:10:39.087 8738.133 - 8790.773: 1.6137% ( 50) 00:10:39.087 8790.773 - 8843.412: 2.1004% ( 57) 00:10:39.087 8843.412 - 8896.051: 2.6895% ( 69) 00:10:39.087 8896.051 - 8948.691: 3.4921% ( 94) 00:10:39.087 8948.691 - 9001.330: 4.3545% ( 101) 00:10:39.087 9001.330 - 9053.969: 5.8316% ( 173) 00:10:39.087 9053.969 - 9106.609: 7.4368% ( 188) 00:10:39.087 9106.609 - 9159.248: 9.3494% ( 224) 00:10:39.087 9159.248 - 9211.888: 11.6889% ( 274) 00:10:39.087 9211.888 - 9264.527: 14.1052% ( 283) 00:10:39.087 9264.527 - 9317.166: 16.4874% ( 279) 00:10:39.087 9317.166 - 9369.806: 19.8429% ( 393) 00:10:39.087 9369.806 - 9422.445: 23.3436% ( 410) 00:10:39.087 9422.445 - 9475.084: 26.8357% ( 409) 00:10:39.087 9475.084 - 9527.724: 30.7462% ( 458) 00:10:39.087 9527.724 - 9580.363: 34.1445% ( 398) 00:10:39.087 9580.363 - 9633.002: 37.5598% ( 400) 00:10:39.087 9633.002 - 9685.642: 41.2227% ( 429) 00:10:39.087 9685.642 - 9738.281: 44.6124% ( 397) 00:10:39.087 9738.281 - 9790.920: 47.6434% ( 355) 00:10:39.087 9790.920 - 9843.560: 50.8197% ( 372) 00:10:39.087 9843.560 - 9896.199: 53.6971% ( 337) 00:10:39.087 9896.199 - 9948.839: 55.8829% ( 256) 00:10:39.087 9948.839 - 10001.478: 57.6588% ( 208) 00:10:39.087 10001.478 - 10054.117: 59.5031% ( 216) 00:10:39.087 10054.117 - 10106.757: 60.6130% ( 130) 00:10:39.087 10106.757 - 10159.396: 61.7657% ( 135) 00:10:39.087 10159.396 - 10212.035: 62.8074% ( 122) 00:10:39.087 10212.035 - 10264.675: 63.4648% ( 77) 00:10:39.087 10264.675 - 10317.314: 64.3613% ( 105) 00:10:39.087 10317.314 - 10369.953: 65.2237% ( 101) 00:10:39.087 10369.953 - 10422.593: 65.8982% ( 79) 00:10:39.087 10422.593 - 10475.232: 66.7606% ( 101) 00:10:39.087 10475.232 - 10527.871: 67.6656% ( 106) 00:10:39.087 10527.871 - 10580.511: 68.7158% ( 123) 00:10:39.087 10580.511 - 10633.150: 69.7917% ( 126) 00:10:39.087 10633.150 - 10685.790: 70.7223% ( 109) 00:10:39.087 10685.790 - 10738.429: 72.0031% ( 150) 00:10:39.087 10738.429 - 10791.068: 72.9764% ( 114) 00:10:39.087 10791.068 - 10843.708: 73.7107% ( 86) 00:10:39.087 10843.708 - 10896.347: 74.3340% ( 73) 00:10:39.087 10896.347 - 10948.986: 74.8292% ( 58) 00:10:39.087 10948.986 - 11001.626: 75.2988% ( 55) 00:10:39.087 11001.626 - 11054.265: 75.9307% ( 74) 00:10:39.087 
11054.265 - 11106.904: 76.4088% ( 56) 00:10:39.087 11106.904 - 11159.544: 76.9382% ( 62) 00:10:39.087 11159.544 - 11212.183: 77.4419% ( 59) 00:10:39.087 11212.183 - 11264.822: 77.7835% ( 40) 00:10:39.087 11264.822 - 11317.462: 78.2189% ( 51) 00:10:39.087 11317.462 - 11370.101: 78.4665% ( 29) 00:10:39.087 11370.101 - 11422.741: 78.8081% ( 40) 00:10:39.087 11422.741 - 11475.380: 79.1154% ( 36) 00:10:39.087 11475.380 - 11528.019: 79.4911% ( 44) 00:10:39.087 11528.019 - 11580.659: 79.7302% ( 28) 00:10:39.087 11580.659 - 11633.298: 79.9949% ( 31) 00:10:39.087 11633.298 - 11685.937: 80.2937% ( 35) 00:10:39.087 11685.937 - 11738.577: 80.5840% ( 34) 00:10:39.087 11738.577 - 11791.216: 80.9512% ( 43) 00:10:39.087 11791.216 - 11843.855: 81.1561% ( 24) 00:10:39.087 11843.855 - 11896.495: 81.4378% ( 33) 00:10:39.087 11896.495 - 11949.134: 81.6428% ( 24) 00:10:39.087 11949.134 - 12001.773: 81.7452% ( 12) 00:10:39.087 12001.773 - 12054.413: 81.8391% ( 11) 00:10:39.087 12054.413 - 12107.052: 81.8904% ( 6) 00:10:39.087 12107.052 - 12159.692: 81.9587% ( 8) 00:10:39.087 12159.692 - 12212.331: 82.0953% ( 16) 00:10:39.087 12212.331 - 12264.970: 82.2575% ( 19) 00:10:39.087 12264.970 - 12317.610: 82.4027% ( 17) 00:10:39.087 12317.610 - 12370.249: 82.5393% ( 16) 00:10:39.087 12370.249 - 12422.888: 82.6844% ( 17) 00:10:39.088 12422.888 - 12475.528: 82.7869% ( 12) 00:10:39.088 12475.528 - 12528.167: 82.9662% ( 21) 00:10:39.088 12528.167 - 12580.806: 83.2138% ( 29) 00:10:39.088 12580.806 - 12633.446: 83.3163% ( 12) 00:10:39.088 12633.446 - 12686.085: 83.4187% ( 12) 00:10:39.088 12686.085 - 12738.724: 83.5809% ( 19) 00:10:39.088 12738.724 - 12791.364: 83.8029% ( 26) 00:10:39.088 12791.364 - 12844.003: 84.0249% ( 26) 00:10:39.088 12844.003 - 12896.643: 84.2555% ( 27) 00:10:39.088 12896.643 - 12949.282: 84.7080% ( 53) 00:10:39.088 12949.282 - 13001.921: 85.0068% ( 35) 00:10:39.088 13001.921 - 13054.561: 85.2630% ( 30) 00:10:39.088 13054.561 - 13107.200: 85.4764% ( 25) 00:10:39.088 13107.200 - 13159.839: 85.7155% ( 28) 00:10:39.088 13159.839 - 13212.479: 85.9204% ( 24) 00:10:39.088 13212.479 - 13265.118: 86.0827% ( 19) 00:10:39.088 13265.118 - 13317.757: 86.2534% ( 20) 00:10:39.088 13317.757 - 13370.397: 86.4242% ( 20) 00:10:39.088 13370.397 - 13423.036: 86.6206% ( 23) 00:10:39.088 13423.036 - 13475.676: 86.7999% ( 21) 00:10:39.088 13475.676 - 13580.954: 87.2439% ( 52) 00:10:39.088 13580.954 - 13686.233: 87.7049% ( 54) 00:10:39.088 13686.233 - 13791.512: 88.1489% ( 52) 00:10:39.088 13791.512 - 13896.790: 88.3794% ( 27) 00:10:39.088 13896.790 - 14002.069: 88.6441% ( 31) 00:10:39.088 14002.069 - 14107.348: 88.9259% ( 33) 00:10:39.088 14107.348 - 14212.627: 89.3784% ( 53) 00:10:39.088 14212.627 - 14317.905: 89.9676% ( 69) 00:10:39.088 14317.905 - 14423.184: 90.6250% ( 77) 00:10:39.088 14423.184 - 14528.463: 91.2056% ( 68) 00:10:39.088 14528.463 - 14633.741: 91.6923% ( 57) 00:10:39.088 14633.741 - 14739.020: 92.0936% ( 47) 00:10:39.088 14739.020 - 14844.299: 92.4863% ( 46) 00:10:39.088 14844.299 - 14949.578: 92.7254% ( 28) 00:10:39.088 14949.578 - 15054.856: 92.9218% ( 23) 00:10:39.088 15054.856 - 15160.135: 93.1011% ( 21) 00:10:39.088 15160.135 - 15265.414: 93.2121% ( 13) 00:10:39.088 15265.414 - 15370.692: 93.3316% ( 14) 00:10:39.088 15370.692 - 15475.971: 93.5365% ( 24) 00:10:39.088 15475.971 - 15581.250: 93.6988% ( 19) 00:10:39.088 15581.250 - 15686.529: 93.8268% ( 15) 00:10:39.088 15686.529 - 15791.807: 94.0745% ( 29) 00:10:39.088 15791.807 - 15897.086: 94.2794% ( 24) 00:10:39.088 15897.086 - 16002.365: 94.4416% ( 19) 
00:10:39.088 16002.365 - 16107.643: 94.6038% ( 19) 00:10:39.088 16107.643 - 16212.922: 94.8258% ( 26) 00:10:39.088 16212.922 - 16318.201: 95.0137% ( 22) 00:10:39.088 16318.201 - 16423.480: 95.1161% ( 12) 00:10:39.088 16423.480 - 16528.758: 95.2186% ( 12) 00:10:39.088 16528.758 - 16634.037: 95.4150% ( 23) 00:10:39.088 16634.037 - 16739.316: 95.6540% ( 28) 00:10:39.088 16739.316 - 16844.594: 95.9956% ( 40) 00:10:39.088 16844.594 - 16949.873: 96.3200% ( 38) 00:10:39.088 16949.873 - 17055.152: 96.5762% ( 30) 00:10:39.088 17055.152 - 17160.431: 96.7725% ( 23) 00:10:39.088 17160.431 - 17265.709: 96.9348% ( 19) 00:10:39.088 17265.709 - 17370.988: 97.1055% ( 20) 00:10:39.088 17370.988 - 17476.267: 97.5068% ( 47) 00:10:39.088 17476.267 - 17581.545: 97.6605% ( 18) 00:10:39.088 17581.545 - 17686.824: 97.8484% ( 22) 00:10:39.088 17686.824 - 17792.103: 98.0020% ( 18) 00:10:39.088 17792.103 - 17897.382: 98.2070% ( 24) 00:10:39.088 17897.382 - 18002.660: 98.2838% ( 9) 00:10:39.088 18002.660 - 18107.939: 98.3436% ( 7) 00:10:39.088 18107.939 - 18213.218: 98.4033% ( 7) 00:10:39.088 18213.218 - 18318.496: 98.4887% ( 10) 00:10:39.088 18318.496 - 18423.775: 98.5229% ( 4) 00:10:39.088 18423.775 - 18529.054: 98.5485% ( 3) 00:10:39.088 18529.054 - 18634.333: 98.5741% ( 3) 00:10:39.088 18634.333 - 18739.611: 98.6083% ( 4) 00:10:39.088 18739.611 - 18844.890: 98.6424% ( 4) 00:10:39.088 18844.890 - 18950.169: 98.6851% ( 5) 00:10:39.088 18950.169 - 19055.447: 98.7193% ( 4) 00:10:39.088 19055.447 - 19160.726: 98.7534% ( 4) 00:10:39.088 19160.726 - 19266.005: 98.7876% ( 4) 00:10:39.088 19266.005 - 19371.284: 98.8217% ( 4) 00:10:39.088 19371.284 - 19476.562: 98.8559% ( 4) 00:10:39.088 19476.562 - 19581.841: 98.8900% ( 4) 00:10:39.088 19581.841 - 19687.120: 98.9071% ( 2) 00:10:39.088 28004.138 - 28214.696: 98.9669% ( 7) 00:10:39.088 28214.696 - 28425.253: 99.0266% ( 7) 00:10:39.088 28425.253 - 28635.810: 99.0864% ( 7) 00:10:39.088 28635.810 - 28846.368: 99.1462% ( 7) 00:10:39.088 28846.368 - 29056.925: 99.2059% ( 7) 00:10:39.088 29056.925 - 29267.483: 99.2657% ( 7) 00:10:39.088 29267.483 - 29478.040: 99.3255% ( 7) 00:10:39.088 29478.040 - 29688.598: 99.3938% ( 8) 00:10:39.088 29688.598 - 29899.155: 99.4536% ( 7) 00:10:39.088 35584.206 - 35794.763: 99.4962% ( 5) 00:10:39.088 35794.763 - 36005.320: 99.5560% ( 7) 00:10:39.088 36005.320 - 36215.878: 99.6072% ( 6) 00:10:39.088 36215.878 - 36426.435: 99.6670% ( 7) 00:10:39.088 36426.435 - 36636.993: 99.7353% ( 8) 00:10:39.088 36636.993 - 36847.550: 99.7951% ( 7) 00:10:39.088 36847.550 - 37058.108: 99.8548% ( 7) 00:10:39.088 37058.108 - 37268.665: 99.9232% ( 8) 00:10:39.088 37268.665 - 37479.222: 99.9829% ( 7) 00:10:39.088 37479.222 - 37689.780: 100.0000% ( 2) 00:10:39.088 00:10:39.088 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:39.088 ============================================================================== 00:10:39.088 Range in us Cumulative IO count 00:10:39.088 8369.658 - 8422.297: 0.0085% ( 1) 00:10:39.088 8422.297 - 8474.937: 0.0340% ( 3) 00:10:39.088 8474.937 - 8527.576: 0.0764% ( 5) 00:10:39.088 8527.576 - 8580.215: 0.1529% ( 9) 00:10:39.088 8580.215 - 8632.855: 0.2972% ( 17) 00:10:39.088 8632.855 - 8685.494: 0.4671% ( 20) 00:10:39.088 8685.494 - 8738.133: 0.7048% ( 28) 00:10:39.088 8738.133 - 8790.773: 1.0190% ( 37) 00:10:39.088 8790.773 - 8843.412: 1.2483% ( 27) 00:10:39.088 8843.412 - 8896.051: 1.7748% ( 62) 00:10:39.088 8896.051 - 8948.691: 2.6240% ( 100) 00:10:39.088 8948.691 - 9001.330: 3.7109% ( 128) 00:10:39.088 9001.330 - 9053.969: 
5.1800% ( 173) 00:10:39.088 9053.969 - 9106.609: 7.4474% ( 267) 00:10:39.088 9106.609 - 9159.248: 9.6892% ( 264) 00:10:39.088 9159.248 - 9211.888: 12.0329% ( 276) 00:10:39.088 9211.888 - 9264.527: 14.7418% ( 319) 00:10:39.088 9264.527 - 9317.166: 17.5526% ( 331) 00:10:39.088 9317.166 - 9369.806: 20.3125% ( 325) 00:10:39.088 9369.806 - 9422.445: 23.5394% ( 380) 00:10:39.088 9422.445 - 9475.084: 27.5645% ( 474) 00:10:39.088 9475.084 - 9527.724: 31.2245% ( 431) 00:10:39.088 9527.724 - 9580.363: 35.2921% ( 479) 00:10:39.088 9580.363 - 9633.002: 39.2748% ( 469) 00:10:39.088 9633.002 - 9685.642: 42.8838% ( 425) 00:10:39.088 9685.642 - 9738.281: 46.2721% ( 399) 00:10:39.088 9738.281 - 9790.920: 49.0065% ( 322) 00:10:39.088 9790.920 - 9843.560: 52.0550% ( 359) 00:10:39.088 9843.560 - 9896.199: 54.4158% ( 278) 00:10:39.088 9896.199 - 9948.839: 56.1226% ( 201) 00:10:39.088 9948.839 - 10001.478: 57.8719% ( 206) 00:10:39.088 10001.478 - 10054.117: 59.3071% ( 169) 00:10:39.088 10054.117 - 10106.757: 60.6233% ( 155) 00:10:39.088 10106.757 - 10159.396: 61.6338% ( 119) 00:10:39.088 10159.396 - 10212.035: 62.3302% ( 82) 00:10:39.088 10212.035 - 10264.675: 62.9671% ( 75) 00:10:39.088 10264.675 - 10317.314: 63.5954% ( 74) 00:10:39.088 10317.314 - 10369.953: 64.3003% ( 83) 00:10:39.088 10369.953 - 10422.593: 64.8777% ( 68) 00:10:39.088 10422.593 - 10475.232: 65.5825% ( 83) 00:10:39.088 10475.232 - 10527.871: 66.4742% ( 105) 00:10:39.088 10527.871 - 10580.511: 67.6970% ( 144) 00:10:39.088 10580.511 - 10633.150: 68.6226% ( 109) 00:10:39.088 10633.150 - 10685.790: 69.7690% ( 135) 00:10:39.088 10685.790 - 10738.429: 70.6692% ( 106) 00:10:39.088 10738.429 - 10791.068: 71.6202% ( 112) 00:10:39.088 10791.068 - 10843.708: 72.4270% ( 95) 00:10:39.088 10843.708 - 10896.347: 73.1148% ( 81) 00:10:39.088 10896.347 - 10948.986: 73.8281% ( 84) 00:10:39.088 10948.986 - 11001.626: 74.3886% ( 66) 00:10:39.088 11001.626 - 11054.265: 74.9915% ( 71) 00:10:39.088 11054.265 - 11106.904: 75.6878% ( 82) 00:10:39.088 11106.904 - 11159.544: 76.3587% ( 79) 00:10:39.088 11159.544 - 11212.183: 76.9786% ( 73) 00:10:39.088 11212.183 - 11264.822: 77.5985% ( 73) 00:10:39.088 11264.822 - 11317.462: 77.9467% ( 41) 00:10:39.088 11317.462 - 11370.101: 78.2269% ( 33) 00:10:39.088 11370.101 - 11422.741: 78.6260% ( 47) 00:10:39.088 11422.741 - 11475.380: 78.8213% ( 23) 00:10:39.088 11475.380 - 11528.019: 79.0082% ( 22) 00:10:39.088 11528.019 - 11580.659: 79.1780% ( 20) 00:10:39.088 11580.659 - 11633.298: 79.3563% ( 21) 00:10:39.088 11633.298 - 11685.937: 79.5346% ( 21) 00:10:39.088 11685.937 - 11738.577: 79.7130% ( 21) 00:10:39.088 11738.577 - 11791.216: 79.9423% ( 27) 00:10:39.088 11791.216 - 11843.855: 80.1630% ( 26) 00:10:39.088 11843.855 - 11896.495: 80.3668% ( 24) 00:10:39.088 11896.495 - 11949.134: 80.5707% ( 24) 00:10:39.088 11949.134 - 12001.773: 80.8084% ( 28) 00:10:39.088 12001.773 - 12054.413: 81.0971% ( 34) 00:10:39.088 12054.413 - 12107.052: 81.3689% ( 32) 00:10:39.088 12107.052 - 12159.692: 81.6916% ( 38) 00:10:39.088 12159.692 - 12212.331: 81.9973% ( 36) 00:10:39.088 12212.331 - 12264.970: 82.2860% ( 34) 00:10:39.088 12264.970 - 12317.610: 82.4643% ( 21) 00:10:39.088 12317.610 - 12370.249: 82.6427% ( 21) 00:10:39.088 12370.249 - 12422.888: 82.8295% ( 22) 00:10:39.088 12422.888 - 12475.528: 83.0588% ( 27) 00:10:39.088 12475.528 - 12528.167: 83.3560% ( 35) 00:10:39.088 12528.167 - 12580.806: 83.6192% ( 31) 00:10:39.088 12580.806 - 12633.446: 83.8400% ( 26) 00:10:39.088 12633.446 - 12686.085: 84.0353% ( 23) 00:10:39.088 12686.085 - 
12738.724: 84.1882% ( 18) 00:10:39.088 12738.724 - 12791.364: 84.3410% ( 18) 00:10:39.088 12791.364 - 12844.003: 84.5024% ( 19) 00:10:39.088 12844.003 - 12896.643: 84.6213% ( 14) 00:10:39.088 12896.643 - 12949.282: 84.7486% ( 15) 00:10:39.088 12949.282 - 13001.921: 84.8760% ( 15) 00:10:39.088 13001.921 - 13054.561: 84.9779% ( 12) 00:10:39.088 13054.561 - 13107.200: 85.1308% ( 18) 00:10:39.088 13107.200 - 13159.839: 85.2666% ( 16) 00:10:39.088 13159.839 - 13212.479: 85.4110% ( 17) 00:10:39.088 13212.479 - 13265.118: 85.5214% ( 13) 00:10:39.088 13265.118 - 13317.757: 85.6573% ( 16) 00:10:39.088 13317.757 - 13370.397: 85.7677% ( 13) 00:10:39.088 13370.397 - 13423.036: 85.8781% ( 13) 00:10:39.088 13423.036 - 13475.676: 85.9715% ( 11) 00:10:39.089 13475.676 - 13580.954: 86.3281% ( 42) 00:10:39.089 13580.954 - 13686.233: 86.9141% ( 69) 00:10:39.089 13686.233 - 13791.512: 87.5934% ( 80) 00:10:39.089 13791.512 - 13896.790: 88.1369% ( 64) 00:10:39.089 13896.790 - 14002.069: 88.6294% ( 58) 00:10:39.089 14002.069 - 14107.348: 89.0880% ( 54) 00:10:39.089 14107.348 - 14212.627: 89.4701% ( 45) 00:10:39.089 14212.627 - 14317.905: 89.8777% ( 48) 00:10:39.089 14317.905 - 14423.184: 90.2683% ( 46) 00:10:39.089 14423.184 - 14528.463: 90.5571% ( 34) 00:10:39.089 14528.463 - 14633.741: 90.8203% ( 31) 00:10:39.089 14633.741 - 14739.020: 91.1345% ( 37) 00:10:39.089 14739.020 - 14844.299: 91.7289% ( 70) 00:10:39.089 14844.299 - 14949.578: 92.3064% ( 68) 00:10:39.089 14949.578 - 15054.856: 92.8838% ( 68) 00:10:39.089 15054.856 - 15160.135: 93.2405% ( 42) 00:10:39.089 15160.135 - 15265.414: 93.5547% ( 37) 00:10:39.089 15265.414 - 15370.692: 93.8944% ( 40) 00:10:39.089 15370.692 - 15475.971: 94.3784% ( 57) 00:10:39.089 15475.971 - 15581.250: 94.5992% ( 26) 00:10:39.089 15581.250 - 15686.529: 94.8624% ( 31) 00:10:39.089 15686.529 - 15791.807: 95.0747% ( 25) 00:10:39.089 15791.807 - 15897.086: 95.1596% ( 10) 00:10:39.089 15897.086 - 16002.365: 95.2191% ( 7) 00:10:39.089 16002.365 - 16107.643: 95.3040% ( 10) 00:10:39.089 16107.643 - 16212.922: 95.3974% ( 11) 00:10:39.089 16212.922 - 16318.201: 95.4908% ( 11) 00:10:39.089 16318.201 - 16423.480: 95.5673% ( 9) 00:10:39.089 16423.480 - 16528.758: 95.6097% ( 5) 00:10:39.089 16528.758 - 16634.037: 95.7201% ( 13) 00:10:39.089 16634.037 - 16739.316: 95.8984% ( 21) 00:10:39.089 16739.316 - 16844.594: 96.2296% ( 39) 00:10:39.089 16844.594 - 16949.873: 96.6118% ( 45) 00:10:39.089 16949.873 - 17055.152: 96.7137% ( 12) 00:10:39.089 17055.152 - 17160.431: 96.7391% ( 3) 00:10:39.089 17265.709 - 17370.988: 96.7476% ( 1) 00:10:39.089 17370.988 - 17476.267: 96.7561% ( 1) 00:10:39.089 17476.267 - 17581.545: 96.8410% ( 10) 00:10:39.089 17581.545 - 17686.824: 97.0194% ( 21) 00:10:39.089 17686.824 - 17792.103: 97.4949% ( 56) 00:10:39.089 17792.103 - 17897.382: 97.7242% ( 27) 00:10:39.089 17897.382 - 18002.660: 97.9280% ( 24) 00:10:39.089 18002.660 - 18107.939: 98.0639% ( 16) 00:10:39.089 18107.939 - 18213.218: 98.1148% ( 6) 00:10:39.089 18213.218 - 18318.496: 98.1403% ( 3) 00:10:39.089 18318.496 - 18423.775: 98.2082% ( 8) 00:10:39.089 18423.775 - 18529.054: 98.2931% ( 10) 00:10:39.089 18529.054 - 18634.333: 98.6158% ( 38) 00:10:39.089 18634.333 - 18739.611: 98.7177% ( 12) 00:10:39.089 18739.611 - 18844.890: 98.7857% ( 8) 00:10:39.089 18844.890 - 18950.169: 98.8536% ( 8) 00:10:39.089 18950.169 - 19055.447: 98.8791% ( 3) 00:10:39.089 19055.447 - 19160.726: 98.9130% ( 4) 00:10:39.089 19476.562 - 19581.841: 98.9300% ( 2) 00:10:39.089 19581.841 - 19687.120: 98.9640% ( 4) 00:10:39.089 19687.120 - 
19792.398: 98.9980% ( 4) 00:10:39.089 19792.398 - 19897.677: 99.0234% ( 3) 00:10:39.089 19897.677 - 20002.956: 99.0574% ( 4) 00:10:39.089 20002.956 - 20108.235: 99.0829% ( 3) 00:10:39.089 20108.235 - 20213.513: 99.1084% ( 3) 00:10:39.089 20213.513 - 20318.792: 99.1423% ( 4) 00:10:39.089 20318.792 - 20424.071: 99.1763% ( 4) 00:10:39.089 20424.071 - 20529.349: 99.2103% ( 4) 00:10:39.089 20529.349 - 20634.628: 99.2442% ( 4) 00:10:39.089 20634.628 - 20739.907: 99.2697% ( 3) 00:10:39.089 20739.907 - 20845.186: 99.3037% ( 4) 00:10:39.089 20845.186 - 20950.464: 99.3376% ( 4) 00:10:39.089 20950.464 - 21055.743: 99.3716% ( 4) 00:10:39.089 21055.743 - 21161.022: 99.4056% ( 4) 00:10:39.089 21161.022 - 21266.300: 99.4310% ( 3) 00:10:39.089 21266.300 - 21371.579: 99.4565% ( 3) 00:10:39.089 27372.466 - 27583.023: 99.4650% ( 1) 00:10:39.089 27583.023 - 27793.581: 99.5245% ( 7) 00:10:39.089 27793.581 - 28004.138: 99.5924% ( 8) 00:10:39.089 28004.138 - 28214.696: 99.6518% ( 7) 00:10:39.089 28214.696 - 28425.253: 99.7113% ( 7) 00:10:39.089 28425.253 - 28635.810: 99.7707% ( 7) 00:10:39.089 28635.810 - 28846.368: 99.8302% ( 7) 00:10:39.089 28846.368 - 29056.925: 99.8896% ( 7) 00:10:39.089 29056.925 - 29267.483: 99.9490% ( 7) 00:10:39.089 29267.483 - 29478.040: 100.0000% ( 6) 00:10:39.089 00:10:39.089 14:16:03 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:39.089 00:10:39.089 real 0m2.691s 00:10:39.089 user 0m2.288s 00:10:39.089 sys 0m0.304s 00:10:39.089 14:16:03 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.089 14:16:03 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:39.089 ************************************ 00:10:39.089 END TEST nvme_perf 00:10:39.089 ************************************ 00:10:39.089 14:16:03 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:39.089 14:16:03 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:39.089 14:16:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.089 14:16:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:39.089 ************************************ 00:10:39.089 START TEST nvme_hello_world 00:10:39.089 ************************************ 00:10:39.089 14:16:03 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:39.348 Initializing NVMe Controllers 00:10:39.348 Attached to 0000:00:10.0 00:10:39.348 Namespace ID: 1 size: 6GB 00:10:39.348 Attached to 0000:00:11.0 00:10:39.348 Namespace ID: 1 size: 5GB 00:10:39.348 Attached to 0000:00:13.0 00:10:39.348 Namespace ID: 1 size: 1GB 00:10:39.348 Attached to 0000:00:12.0 00:10:39.348 Namespace ID: 1 size: 4GB 00:10:39.348 Namespace ID: 2 size: 4GB 00:10:39.348 Namespace ID: 3 size: 4GB 00:10:39.348 Initialization complete. 00:10:39.348 INFO: using host memory buffer for IO 00:10:39.348 Hello world! 00:10:39.348 INFO: using host memory buffer for IO 00:10:39.348 Hello world! 00:10:39.348 INFO: using host memory buffer for IO 00:10:39.348 Hello world! 00:10:39.348 INFO: using host memory buffer for IO 00:10:39.348 Hello world! 00:10:39.348 INFO: using host memory buffer for IO 00:10:39.348 Hello world! 00:10:39.348 INFO: using host memory buffer for IO 00:10:39.348 Hello world! 
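For reference: each "Hello world!" above is one write/read round trip. The hello_world example writes the string to a namespace and reads it back, once per attached namespace (four controllers, with NSIDs 1-3 on 0000:00:12.0). A minimal sketch for reproducing this pass outside the harness, assuming the repo layout shown in the log; scripts/setup.sh is the stock SPDK device-binding script:

  cd /home/vagrant/spdk_repo/spdk
  sudo HUGEMEM=2048 scripts/setup.sh      # rebind controllers to vfio-pci/uio and reserve hugepages
  sudo build/examples/hello_world -i 0    # -i 0: same shared-memory instance id the harness passes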
00:10:39.608 ************************************ 00:10:39.608 END TEST nvme_hello_world 00:10:39.608 ************************************ 00:10:39.608 00:10:39.608 real 0m0.319s 00:10:39.608 user 0m0.118s 00:10:39.608 sys 0m0.159s 00:10:39.608 14:16:04 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.608 14:16:04 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:39.608 14:16:04 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:39.608 14:16:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.608 14:16:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.608 14:16:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:39.608 ************************************ 00:10:39.608 START TEST nvme_sgl 00:10:39.608 ************************************ 00:10:39.608 14:16:04 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:39.869 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:39.869 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:39.869 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:39.869 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:39.869 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:39.869 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:39.869 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:39.869 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:39.869 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:39.869 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:39.869 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:39.869 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:39.869 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:10:39.869 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:39.869 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:39.869 NVMe Readv/Writev Request test 00:10:39.869 Attached to 0000:00:10.0 00:10:39.869 Attached to 0000:00:11.0 00:10:39.869 Attached to 0000:00:13.0 00:10:39.869 Attached to 0000:00:12.0 00:10:39.869 0000:00:10.0: build_io_request_2 test passed 00:10:39.869 0000:00:10.0: build_io_request_4 test passed 00:10:39.869 0000:00:10.0: build_io_request_5 test passed 00:10:39.869 0000:00:10.0: build_io_request_6 test passed 00:10:39.869 0000:00:10.0: build_io_request_7 test passed 00:10:39.869 0000:00:10.0: build_io_request_10 test passed 00:10:39.869 0000:00:11.0: build_io_request_2 test passed 00:10:39.869 0000:00:11.0: build_io_request_4 test passed 00:10:39.869 0000:00:11.0: build_io_request_5 test passed 00:10:39.869 0000:00:11.0: build_io_request_6 test passed 00:10:39.869 0000:00:11.0: build_io_request_7 test passed 00:10:39.869 0000:00:11.0: build_io_request_10 test passed 00:10:39.869 Cleaning up... 00:10:39.869 00:10:39.869 real 0m0.379s 00:10:39.869 user 0m0.174s 00:10:39.869 sys 0m0.155s 00:10:39.869 14:16:04 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.869 14:16:04 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:39.869 ************************************ 00:10:39.869 END TEST nvme_sgl 00:10:39.869 ************************************ 00:10:40.128 14:16:04 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:40.128 14:16:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.128 14:16:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.128 14:16:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:40.128 ************************************ 00:10:40.128 START TEST nvme_e2edp 00:10:40.128 ************************************ 00:10:40.128 14:16:04 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:40.388 NVMe Write/Read with End-to-End data protection test 00:10:40.388 Attached to 0000:00:10.0 00:10:40.388 Attached to 0000:00:11.0 00:10:40.388 Attached to 0000:00:13.0 00:10:40.388 Attached to 0000:00:12.0 00:10:40.388 Cleaning up... 
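Note: the e2edp run above attaches all four controllers and goes straight to "Cleaning up...", which usually means no attached namespace is formatted with end-to-end protection information, so the protected write/read phase has nothing to exercise. One way to check PI support on a controller (a sketch: the identify example ships with SPDK, but the exact binary path can differ between versions):

  cd /home/vagrant/spdk_repo/spdk
  sudo build/examples/identify -r 'trtype:PCIe traddr:0000:00:10.0' | grep -i -A 2 protection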
00:10:40.388 ************************************ 00:10:40.388 END TEST nvme_e2edp 00:10:40.388 ************************************ 00:10:40.388 00:10:40.388 real 0m0.295s 00:10:40.388 user 0m0.081s 00:10:40.388 sys 0m0.166s 00:10:40.388 14:16:05 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.388 14:16:05 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:40.388 14:16:05 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:40.388 14:16:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.388 14:16:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.388 14:16:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:40.388 ************************************ 00:10:40.388 START TEST nvme_reserve 00:10:40.388 ************************************ 00:10:40.388 14:16:05 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:40.647 ===================================================== 00:10:40.647 NVMe Controller at PCI bus 0, device 16, function 0 00:10:40.647 ===================================================== 00:10:40.647 Reservations: Not Supported 00:10:40.647 ===================================================== 00:10:40.647 NVMe Controller at PCI bus 0, device 17, function 0 00:10:40.647 ===================================================== 00:10:40.647 Reservations: Not Supported 00:10:40.647 ===================================================== 00:10:40.647 NVMe Controller at PCI bus 0, device 19, function 0 00:10:40.647 ===================================================== 00:10:40.647 Reservations: Not Supported 00:10:40.647 ===================================================== 00:10:40.647 NVMe Controller at PCI bus 0, device 18, function 0 00:10:40.647 ===================================================== 00:10:40.647 Reservations: Not Supported 00:10:40.647 Reservation test passed 00:10:40.647 00:10:40.647 real 0m0.292s 00:10:40.647 user 0m0.093s 00:10:40.647 sys 0m0.153s 00:10:40.647 ************************************ 00:10:40.647 END TEST nvme_reserve 00:10:40.647 ************************************ 00:10:40.647 14:16:05 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.647 14:16:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:40.647 14:16:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:40.647 14:16:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.647 14:16:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.647 14:16:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:40.907 ************************************ 00:10:40.907 START TEST nvme_err_injection 00:10:40.907 ************************************ 00:10:40.907 14:16:05 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:41.166 NVMe Error Injection test 00:10:41.166 Attached to 0000:00:10.0 00:10:41.166 Attached to 0000:00:11.0 00:10:41.166 Attached to 0000:00:13.0 00:10:41.166 Attached to 0000:00:12.0 00:10:41.166 0000:00:10.0: get features failed as expected 00:10:41.166 0000:00:11.0: get features failed as expected 00:10:41.166 0000:00:13.0: get features failed as expected 00:10:41.166 0000:00:12.0: get features failed as expected 00:10:41.166 
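In the err_injection output, the "failed as expected" lines above come from commands issued while error injection is armed; the "successfully as expected" lines below are the same commands repeated after the injection is removed, and the test asserts both outcomes. To rerun just this check with the harness's arguments (sketch, same path assumptions as above):

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection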
0000:00:10.0: get features successfully as expected 00:10:41.166 0000:00:11.0: get features successfully as expected 00:10:41.166 0000:00:13.0: get features successfully as expected 00:10:41.166 0000:00:12.0: get features successfully as expected 00:10:41.166 0000:00:10.0: read failed as expected 00:10:41.166 0000:00:11.0: read failed as expected 00:10:41.166 0000:00:13.0: read failed as expected 00:10:41.166 0000:00:12.0: read failed as expected 00:10:41.166 0000:00:10.0: read successfully as expected 00:10:41.166 0000:00:11.0: read successfully as expected 00:10:41.166 0000:00:13.0: read successfully as expected 00:10:41.166 0000:00:12.0: read successfully as expected 00:10:41.166 Cleaning up... 00:10:41.166 00:10:41.166 real 0m0.309s 00:10:41.166 user 0m0.108s 00:10:41.166 sys 0m0.153s 00:10:41.166 14:16:05 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.166 14:16:05 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:41.166 ************************************ 00:10:41.166 END TEST nvme_err_injection 00:10:41.166 ************************************ 00:10:41.166 14:16:05 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:41.166 14:16:05 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:10:41.166 14:16:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.166 14:16:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:41.166 ************************************ 00:10:41.166 START TEST nvme_overhead 00:10:41.166 ************************************ 00:10:41.166 14:16:05 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:42.545 Initializing NVMe Controllers 00:10:42.545 Attached to 0000:00:10.0 00:10:42.545 Attached to 0000:00:11.0 00:10:42.545 Attached to 0000:00:13.0 00:10:42.545 Attached to 0000:00:12.0 00:10:42.545 Initialization complete. Launching workers. 
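The submit/complete figures and histograms that follow come from the overhead invocation above: -o 4096 selects 4 KiB I/Os, -t 1 runs for one second, -i 0 reuses the shared-memory instance id, and -H (as used here) enables the per-bucket histogram dump. To rerun the measurement standalone (sketch, same path assumptions):

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0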
00:10:42.545 submit (in ns) avg, min, max = 14204.1, 11230.5, 103012.9 00:10:42.545 complete (in ns) avg, min, max = 8135.2, 7719.7, 65024.9 00:10:42.545 00:10:42.545 Submit histogram 00:10:42.545 ================ 00:10:42.545 Range in us Cumulative Count 00:10:42.545 11.206 - 11.258: 0.0172% ( 1) 00:10:42.545 11.566 - 11.618: 0.0344% ( 1) 00:10:42.545 11.926 - 11.978: 0.0516% ( 1) 00:10:42.545 12.286 - 12.337: 0.0688% ( 1) 00:10:42.545 12.337 - 12.389: 0.1032% ( 2) 00:10:42.545 13.108 - 13.160: 0.1204% ( 1) 00:10:42.545 13.160 - 13.263: 0.3096% ( 11) 00:10:42.545 13.263 - 13.365: 1.9264% ( 94) 00:10:42.545 13.365 - 13.468: 6.6047% ( 272) 00:10:42.545 13.468 - 13.571: 15.6519% ( 526) 00:10:42.545 13.571 - 13.674: 27.3306% ( 679) 00:10:42.545 13.674 - 13.777: 42.4149% ( 877) 00:10:42.545 13.777 - 13.880: 56.0200% ( 791) 00:10:42.545 13.880 - 13.982: 68.2491% ( 711) 00:10:42.545 13.982 - 14.085: 77.6746% ( 548) 00:10:42.545 14.085 - 14.188: 84.3997% ( 391) 00:10:42.545 14.188 - 14.291: 88.6653% ( 248) 00:10:42.545 14.291 - 14.394: 91.6065% ( 171) 00:10:42.545 14.394 - 14.496: 93.1717% ( 91) 00:10:42.545 14.496 - 14.599: 94.0144% ( 49) 00:10:42.545 14.599 - 14.702: 94.4272% ( 24) 00:10:42.545 14.702 - 14.805: 94.5476% ( 7) 00:10:42.545 14.805 - 14.908: 94.5992% ( 3) 00:10:42.545 14.908 - 15.010: 94.6680% ( 4) 00:10:42.545 15.010 - 15.113: 94.7024% ( 2) 00:10:42.545 15.524 - 15.627: 94.7196% ( 1) 00:10:42.545 15.627 - 15.730: 94.7540% ( 2) 00:10:42.545 15.730 - 15.833: 94.7712% ( 1) 00:10:42.545 15.936 - 16.039: 94.7884% ( 1) 00:10:42.545 16.244 - 16.347: 94.8056% ( 1) 00:10:42.545 16.347 - 16.450: 94.8228% ( 1) 00:10:42.545 16.450 - 16.553: 94.8400% ( 1) 00:10:42.545 16.553 - 16.655: 94.8572% ( 1) 00:10:42.545 17.169 - 17.272: 94.8744% ( 1) 00:10:42.545 17.375 - 17.478: 94.8916% ( 1) 00:10:42.545 17.478 - 17.581: 94.9260% ( 2) 00:10:42.545 17.684 - 17.786: 94.9776% ( 3) 00:10:42.545 17.786 - 17.889: 95.0120% ( 2) 00:10:42.545 17.889 - 17.992: 95.0292% ( 1) 00:10:42.545 17.992 - 18.095: 95.1324% ( 6) 00:10:42.545 18.095 - 18.198: 95.3044% ( 10) 00:10:42.545 18.198 - 18.300: 95.3732% ( 4) 00:10:42.545 18.300 - 18.403: 95.5968% ( 13) 00:10:42.545 18.403 - 18.506: 95.8376% ( 14) 00:10:42.545 18.506 - 18.609: 96.1644% ( 19) 00:10:42.545 18.609 - 18.712: 96.3708% ( 12) 00:10:42.545 18.712 - 18.814: 96.5772% ( 12) 00:10:42.545 18.814 - 18.917: 96.8180% ( 14) 00:10:42.545 18.917 - 19.020: 97.0416% ( 13) 00:10:42.545 19.020 - 19.123: 97.2480% ( 12) 00:10:42.545 19.123 - 19.226: 97.4372% ( 11) 00:10:42.545 19.226 - 19.329: 97.6092% ( 10) 00:10:42.545 19.329 - 19.431: 97.7640% ( 9) 00:10:42.545 19.431 - 19.534: 97.9876% ( 13) 00:10:42.545 19.534 - 19.637: 98.1080% ( 7) 00:10:42.545 19.637 - 19.740: 98.2456% ( 8) 00:10:42.545 19.740 - 19.843: 98.2972% ( 3) 00:10:42.545 19.843 - 19.945: 98.4004% ( 6) 00:10:42.545 19.945 - 20.048: 98.4692% ( 4) 00:10:42.545 20.048 - 20.151: 98.5724% ( 6) 00:10:42.545 20.151 - 20.254: 98.6756% ( 6) 00:10:42.545 20.254 - 20.357: 98.6928% ( 1) 00:10:42.545 20.357 - 20.459: 98.7444% ( 3) 00:10:42.545 20.562 - 20.665: 98.8648% ( 7) 00:10:42.545 20.871 - 20.973: 98.9336% ( 4) 00:10:42.545 21.076 - 21.179: 98.9852% ( 3) 00:10:42.545 21.179 - 21.282: 99.0196% ( 2) 00:10:42.545 21.282 - 21.385: 99.1572% ( 8) 00:10:42.545 21.385 - 21.488: 99.1744% ( 1) 00:10:42.545 21.488 - 21.590: 99.2088% ( 2) 00:10:42.545 21.590 - 21.693: 99.2604% ( 3) 00:10:42.545 21.693 - 21.796: 99.3292% ( 4) 00:10:42.545 21.796 - 21.899: 99.4324% ( 6) 00:10:42.545 21.899 - 22.002: 99.4668% ( 2) 00:10:42.545 
22.002 - 22.104: 99.5012% ( 2) 00:10:42.545 22.104 - 22.207: 99.5356% ( 2) 00:10:42.545 22.207 - 22.310: 99.5528% ( 1) 00:10:42.545 22.310 - 22.413: 99.5872% ( 2) 00:10:42.545 22.413 - 22.516: 99.6044% ( 1) 00:10:42.545 22.618 - 22.721: 99.6216% ( 1) 00:10:42.545 24.161 - 24.263: 99.6388% ( 1) 00:10:42.545 24.263 - 24.366: 99.6560% ( 1) 00:10:42.545 24.572 - 24.675: 99.6732% ( 1) 00:10:42.545 24.983 - 25.086: 99.7076% ( 2) 00:10:42.545 25.086 - 25.189: 99.7248% ( 1) 00:10:42.545 25.394 - 25.497: 99.7420% ( 1) 00:10:42.545 27.142 - 27.348: 99.7764% ( 2) 00:10:42.545 27.965 - 28.170: 99.7936% ( 1) 00:10:42.545 30.227 - 30.432: 99.8108% ( 1) 00:10:42.545 37.218 - 37.423: 99.8280% ( 1) 00:10:42.545 37.835 - 38.040: 99.8452% ( 1) 00:10:42.545 38.451 - 38.657: 99.8624% ( 1) 00:10:42.545 41.536 - 41.741: 99.8796% ( 1) 00:10:42.545 41.947 - 42.153: 99.8968% ( 1) 00:10:42.545 45.648 - 45.854: 99.9140% ( 1) 00:10:42.545 48.938 - 49.144: 99.9312% ( 1) 00:10:42.545 50.994 - 51.200: 99.9484% ( 1) 00:10:42.545 59.631 - 60.042: 99.9656% ( 1) 00:10:42.545 74.024 - 74.435: 99.9828% ( 1) 00:10:42.545 102.811 - 103.222: 100.0000% ( 1) 00:10:42.545 00:10:42.545 Complete histogram 00:10:42.545 ================== 00:10:42.545 Range in us Cumulative Count 00:10:42.545 7.711 - 7.762: 0.1204% ( 7) 00:10:42.545 7.762 - 7.814: 2.8896% ( 161) 00:10:42.545 7.814 - 7.865: 16.4947% ( 791) 00:10:42.545 7.865 - 7.916: 44.9604% ( 1655) 00:10:42.545 7.916 - 7.968: 64.6887% ( 1147) 00:10:42.545 7.968 - 8.019: 74.9570% ( 597) 00:10:42.545 8.019 - 8.071: 81.0630% ( 355) 00:10:42.545 8.071 - 8.122: 86.5841% ( 321) 00:10:42.545 8.122 - 8.173: 90.4713% ( 226) 00:10:42.545 8.173 - 8.225: 93.1717% ( 157) 00:10:42.545 8.225 - 8.276: 94.4100% ( 72) 00:10:42.545 8.276 - 8.328: 94.9948% ( 34) 00:10:42.545 8.328 - 8.379: 95.6484% ( 38) 00:10:42.545 8.379 - 8.431: 96.0096% ( 21) 00:10:42.545 8.431 - 8.482: 96.2160% ( 12) 00:10:42.545 8.482 - 8.533: 96.4224% ( 12) 00:10:42.545 8.533 - 8.585: 96.5600% ( 8) 00:10:42.545 8.585 - 8.636: 96.7492% ( 11) 00:10:42.545 8.636 - 8.688: 96.9900% ( 14) 00:10:42.545 8.688 - 8.739: 97.3512% ( 21) 00:10:42.546 8.739 - 8.790: 97.5920% ( 14) 00:10:42.546 8.790 - 8.842: 97.7124% ( 7) 00:10:42.546 8.842 - 8.893: 97.8672% ( 9) 00:10:42.546 8.893 - 8.945: 97.9532% ( 5) 00:10:42.546 8.945 - 8.996: 97.9704% ( 1) 00:10:42.546 8.996 - 9.047: 98.0392% ( 4) 00:10:42.546 9.047 - 9.099: 98.0908% ( 3) 00:10:42.546 9.099 - 9.150: 98.1252% ( 2) 00:10:42.546 9.253 - 9.304: 98.1424% ( 1) 00:10:42.546 9.304 - 9.356: 98.1596% ( 1) 00:10:42.546 9.356 - 9.407: 98.1768% ( 1) 00:10:42.546 9.407 - 9.459: 98.2112% ( 2) 00:10:42.546 9.459 - 9.510: 98.2284% ( 1) 00:10:42.546 9.613 - 9.664: 98.2456% ( 1) 00:10:42.546 9.664 - 9.716: 98.2800% ( 2) 00:10:42.546 9.767 - 9.818: 98.3144% ( 2) 00:10:42.546 9.870 - 9.921: 98.3316% ( 1) 00:10:42.546 10.076 - 10.127: 98.3488% ( 1) 00:10:42.546 10.281 - 10.333: 98.3660% ( 1) 00:10:42.546 11.258 - 11.309: 98.3832% ( 1) 00:10:42.546 11.412 - 11.463: 98.4004% ( 1) 00:10:42.546 11.823 - 11.875: 98.4176% ( 1) 00:10:42.546 11.875 - 11.926: 98.4348% ( 1) 00:10:42.546 11.926 - 11.978: 98.4520% ( 1) 00:10:42.546 12.492 - 12.543: 98.4692% ( 1) 00:10:42.546 12.646 - 12.697: 98.4864% ( 1) 00:10:42.546 12.749 - 12.800: 98.5208% ( 2) 00:10:42.546 13.160 - 13.263: 98.5552% ( 2) 00:10:42.546 13.263 - 13.365: 98.6412% ( 5) 00:10:42.546 13.365 - 13.468: 98.8132% ( 10) 00:10:42.546 13.468 - 13.571: 98.8648% ( 3) 00:10:42.546 13.571 - 13.674: 98.9336% ( 4) 00:10:42.546 13.674 - 13.777: 99.0540% ( 7) 00:10:42.546 
13.777 - 13.880: 99.1228% ( 4) 00:10:42.546 13.880 - 13.982: 99.1916% ( 4) 00:10:42.546 13.982 - 14.085: 99.2260% ( 2) 00:10:42.546 14.085 - 14.188: 99.2948% ( 4) 00:10:42.546 14.188 - 14.291: 99.3464% ( 3) 00:10:42.546 14.291 - 14.394: 99.3808% ( 2) 00:10:42.546 14.394 - 14.496: 99.3980% ( 1) 00:10:42.546 14.496 - 14.599: 99.4152% ( 1) 00:10:42.546 14.599 - 14.702: 99.4324% ( 1) 00:10:42.546 14.702 - 14.805: 99.5012% ( 4) 00:10:42.546 14.908 - 15.010: 99.5184% ( 1) 00:10:42.546 15.010 - 15.113: 99.5528% ( 2) 00:10:42.546 15.216 - 15.319: 99.5872% ( 2) 00:10:42.546 16.450 - 16.553: 99.6044% ( 1) 00:10:42.546 17.684 - 17.786: 99.6216% ( 1) 00:10:42.546 18.506 - 18.609: 99.6388% ( 1) 00:10:42.546 18.609 - 18.712: 99.6560% ( 1) 00:10:42.546 19.020 - 19.123: 99.6732% ( 1) 00:10:42.546 19.329 - 19.431: 99.6904% ( 1) 00:10:42.546 19.431 - 19.534: 99.7076% ( 1) 00:10:42.546 19.637 - 19.740: 99.7248% ( 1) 00:10:42.546 19.945 - 20.048: 99.7592% ( 2) 00:10:42.546 21.076 - 21.179: 99.7764% ( 1) 00:10:42.546 22.002 - 22.104: 99.7936% ( 1) 00:10:42.546 23.030 - 23.133: 99.8108% ( 1) 00:10:42.546 24.572 - 24.675: 99.8280% ( 1) 00:10:42.546 31.049 - 31.255: 99.8452% ( 1) 00:10:42.546 33.928 - 34.133: 99.8624% ( 1) 00:10:42.546 36.190 - 36.395: 99.8796% ( 1) 00:10:42.546 38.657 - 38.863: 99.8968% ( 1) 00:10:42.546 39.480 - 39.685: 99.9140% ( 1) 00:10:42.546 40.508 - 40.713: 99.9312% ( 1) 00:10:42.546 45.443 - 45.648: 99.9484% ( 1) 00:10:42.546 50.583 - 50.789: 99.9656% ( 1) 00:10:42.546 55.107 - 55.518: 99.9828% ( 1) 00:10:42.546 64.977 - 65.388: 100.0000% ( 1) 00:10:42.546 00:10:42.546 ************************************ 00:10:42.546 END TEST nvme_overhead 00:10:42.546 ************************************ 00:10:42.546 00:10:42.546 real 0m1.320s 00:10:42.546 user 0m1.112s 00:10:42.546 sys 0m0.159s 00:10:42.546 14:16:07 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.546 14:16:07 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:42.546 14:16:07 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:42.546 14:16:07 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:42.546 14:16:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.546 14:16:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:42.546 ************************************ 00:10:42.546 START TEST nvme_arbitration 00:10:42.546 ************************************ 00:10:42.546 14:16:07 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:46.741 Initializing NVMe Controllers 00:10:46.741 Attached to 0000:00:10.0 00:10:46.741 Attached to 0000:00:11.0 00:10:46.741 Attached to 0000:00:13.0 00:10:46.741 Attached to 0000:00:12.0 00:10:46.741 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:46.741 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:46.741 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:46.741 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:46.741 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:46.741 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:46.741 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:46.741 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:46.741 Initialization complete. Launching workers. 
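The per-core throughput lines that follow reflect the configuration echoed above: -q 64 (queue depth), -w randrw with -M 50 (a 50/50 read/write mix), -t 3 (seconds), -c 0xf (lcores 0-3), and -n 100000, which is why each result is reported as secs/100000 ios. To rerun the example standalone with the arguments the harness used (sketch):

  sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0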
00:10:46.741 Starting thread on core 1 with urgent priority queue 00:10:46.741 Starting thread on core 2 with urgent priority queue 00:10:46.741 Starting thread on core 3 with urgent priority queue 00:10:46.741 Starting thread on core 0 with urgent priority queue 00:10:46.741 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:10:46.741 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:10:46.741 QEMU NVMe Ctrl (12341 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:10:46.741 QEMU NVMe Ctrl (12342 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:10:46.741 QEMU NVMe Ctrl (12343 ) core 2: 810.67 IO/s 123.36 secs/100000 ios 00:10:46.741 QEMU NVMe Ctrl (12342 ) core 3: 426.67 IO/s 234.38 secs/100000 ios 00:10:46.741 ======================================================== 00:10:46.741 00:10:46.741 00:10:46.741 real 0m3.445s 00:10:46.741 user 0m9.392s 00:10:46.742 sys 0m0.187s 00:10:46.742 14:16:10 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.742 ************************************ 00:10:46.742 END TEST nvme_arbitration 00:10:46.742 ************************************ 00:10:46.742 14:16:10 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:46.742 14:16:10 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:46.742 14:16:10 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:46.742 14:16:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.742 14:16:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:46.742 ************************************ 00:10:46.742 START TEST nvme_single_aen 00:10:46.742 ************************************ 00:10:46.742 14:16:10 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:46.742 Asynchronous Event Request test 00:10:46.742 Attached to 0000:00:10.0 00:10:46.742 Attached to 0000:00:11.0 00:10:46.742 Attached to 0000:00:13.0 00:10:46.742 Attached to 0000:00:12.0 00:10:46.742 Reset controller to setup AER completions for this process 00:10:46.742 Registering asynchronous event callbacks... 
00:10:46.742 Getting orig temperature thresholds of all controllers 00:10:46.742 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:46.742 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:46.742 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:46.742 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:46.742 Setting all controllers temperature threshold low to trigger AER 00:10:46.742 Waiting for all controllers temperature threshold to be set lower 00:10:46.742 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:46.742 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:46.742 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:46.742 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:46.742 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:46.742 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:46.742 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:46.742 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:46.742 Waiting for all controllers to trigger AER and reset threshold 00:10:46.742 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:46.742 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:46.742 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:46.742 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:46.742 Cleaning up... 00:10:46.742 00:10:46.742 real 0m0.327s 00:10:46.742 user 0m0.117s 00:10:46.742 sys 0m0.167s 00:10:46.742 ************************************ 00:10:46.742 END TEST nvme_single_aen 00:10:46.742 ************************************ 00:10:46.742 14:16:11 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.742 14:16:11 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:46.742 14:16:11 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:46.742 14:16:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.742 14:16:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.742 14:16:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:46.742 ************************************ 00:10:46.742 START TEST nvme_doorbell_aers 00:10:46.742 ************************************ 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
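The xtrace above shows how nvme_doorbell_aers finds its targets: gen_nvme.sh emits a JSON config and jq extracts each controller's PCI address. Lifted out of the traced function, the pipeline is:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"    # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

Each bdf is then handed to doorbell_aers as -r 'trtype:PCIe traddr:<bdf>' under a 10-second timeout, as the loop below shows.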
00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:46.742 14:16:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:47.000 [2024-12-10 14:16:11.581463] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:10:56.970 Executing: test_write_invalid_db 00:10:56.970 Waiting for AER completion... 00:10:56.970 Failure: test_write_invalid_db 00:10:56.970 00:10:56.970 Executing: test_invalid_db_write_overflow_sq 00:10:56.970 Waiting for AER completion... 00:10:56.970 Failure: test_invalid_db_write_overflow_sq 00:10:56.970 00:10:56.970 Executing: test_invalid_db_write_overflow_cq 00:10:56.970 Waiting for AER completion... 00:10:56.970 Failure: test_invalid_db_write_overflow_cq 00:10:56.970 00:10:56.970 14:16:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:56.970 14:16:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:56.970 [2024-12-10 14:16:21.662554] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:06.941 Executing: test_write_invalid_db 00:11:06.941 Waiting for AER completion... 00:11:06.941 Failure: test_write_invalid_db 00:11:06.941 00:11:06.941 Executing: test_invalid_db_write_overflow_sq 00:11:06.941 Waiting for AER completion... 00:11:06.941 Failure: test_invalid_db_write_overflow_sq 00:11:06.941 00:11:06.941 Executing: test_invalid_db_write_overflow_cq 00:11:06.941 Waiting for AER completion... 00:11:06.941 Failure: test_invalid_db_write_overflow_cq 00:11:06.941 00:11:06.941 14:16:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:06.941 14:16:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:06.941 [2024-12-10 14:16:31.714266] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:16.910 Executing: test_write_invalid_db 00:11:16.910 Waiting for AER completion... 00:11:16.910 Failure: test_write_invalid_db 00:11:16.910 00:11:16.910 Executing: test_invalid_db_write_overflow_sq 00:11:16.910 Waiting for AER completion... 00:11:16.910 Failure: test_invalid_db_write_overflow_sq 00:11:16.910 00:11:16.910 Executing: test_invalid_db_write_overflow_cq 00:11:16.910 Waiting for AER completion... 
00:11:16.910 Failure: test_invalid_db_write_overflow_cq 00:11:16.910 00:11:16.910 14:16:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:16.910 14:16:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:17.168 [2024-12-10 14:16:41.768592] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 Executing: test_write_invalid_db 00:11:27.140 Waiting for AER completion... 00:11:27.140 Failure: test_write_invalid_db 00:11:27.140 00:11:27.140 Executing: test_invalid_db_write_overflow_sq 00:11:27.140 Waiting for AER completion... 00:11:27.140 Failure: test_invalid_db_write_overflow_sq 00:11:27.140 00:11:27.140 Executing: test_invalid_db_write_overflow_cq 00:11:27.140 Waiting for AER completion... 00:11:27.140 Failure: test_invalid_db_write_overflow_cq 00:11:27.140 00:11:27.140 00:11:27.140 real 0m40.338s 00:11:27.140 user 0m28.710s 00:11:27.140 sys 0m11.247s 00:11:27.140 14:16:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.140 14:16:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:27.140 ************************************ 00:11:27.140 END TEST nvme_doorbell_aers 00:11:27.140 ************************************ 00:11:27.140 14:16:51 nvme -- nvme/nvme.sh@97 -- # uname 00:11:27.140 14:16:51 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:27.140 14:16:51 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:27.140 14:16:51 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:27.140 14:16:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.140 14:16:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:27.140 ************************************ 00:11:27.140 START TEST nvme_multi_aen 00:11:27.140 ************************************ 00:11:27.140 14:16:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:27.140 [2024-12-10 14:16:51.863423] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.863525] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.863542] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.865523] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.865570] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.865585] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.867277] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. 
Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.867322] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.867341] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.868831] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.868871] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 [2024-12-10 14:16:51.868886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65733) is not found. Dropping the request. 00:11:27.140 Child process pid: 66257 00:11:27.399 [Child] Asynchronous Event Request test 00:11:27.399 [Child] Attached to 0000:00:10.0 00:11:27.399 [Child] Attached to 0000:00:11.0 00:11:27.399 [Child] Attached to 0000:00:13.0 00:11:27.399 [Child] Attached to 0000:00:12.0 00:11:27.399 [Child] Registering asynchronous event callbacks... 00:11:27.399 [Child] Getting orig temperature thresholds of all controllers 00:11:27.399 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:27.399 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:27.399 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:27.399 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:27.399 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:27.399 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:27.399 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:27.399 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:27.399 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:27.399 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:27.399 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:27.399 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:27.399 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:27.399 [Child] Cleaning up... 00:11:27.399 Asynchronous Event Request test 00:11:27.399 Attached to 0000:00:10.0 00:11:27.399 Attached to 0000:00:11.0 00:11:27.399 Attached to 0000:00:13.0 00:11:27.399 Attached to 0000:00:12.0 00:11:27.399 Reset controller to setup AER completions for this process 00:11:27.399 Registering asynchronous event callbacks... 
00:11:27.399 Getting orig temperature thresholds of all controllers 00:11:27.399 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:27.399 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:27.399 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:27.399 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:27.399 Setting all controllers temperature threshold low to trigger AER 00:11:27.399 Waiting for all controllers temperature threshold to be set lower 00:11:27.399 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:27.399 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:27.399 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:27.399 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:27.399 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:27.399 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:27.399 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:27.399 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:27.399 Waiting for all controllers to trigger AER and reset threshold 00:11:27.399 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:27.399 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:27.399 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:27.399 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:27.399 Cleaning up... 00:11:27.658 ************************************ 00:11:27.658 END TEST nvme_multi_aen 00:11:27.658 ************************************ 00:11:27.658 00:11:27.658 real 0m0.639s 00:11:27.658 user 0m0.226s 00:11:27.658 sys 0m0.308s 00:11:27.658 14:16:52 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.658 14:16:52 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:27.658 14:16:52 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:27.658 14:16:52 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.658 14:16:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.658 14:16:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:27.658 ************************************ 00:11:27.658 START TEST nvme_startup 00:11:27.658 ************************************ 00:11:27.658 14:16:52 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:27.918 Initializing NVMe Controllers 00:11:27.918 Attached to 0000:00:10.0 00:11:27.918 Attached to 0000:00:11.0 00:11:27.918 Attached to 0000:00:13.0 00:11:27.918 Attached to 0000:00:12.0 00:11:27.918 Initialization complete. 00:11:27.918 Time used:194016.000 (us). 
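nvme_startup reports only total initialization time, about 194 ms here to bring up all four controllers; the -t 1000000 argument appears to be the allowed budget in microseconds (one second) rather than a run duration. Rerun sketch with the harness's arguments:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000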
00:11:27.918 ************************************ 00:11:27.918 END TEST nvme_startup 00:11:27.918 ************************************ 00:11:27.918 00:11:27.918 real 0m0.299s 00:11:27.918 user 0m0.105s 00:11:27.918 sys 0m0.148s 00:11:27.918 14:16:52 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.918 14:16:52 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:27.918 14:16:52 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:27.918 14:16:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:27.918 14:16:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.918 14:16:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:27.918 ************************************ 00:11:27.918 START TEST nvme_multi_secondary 00:11:27.918 ************************************ 00:11:27.918 14:16:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:11:27.918 14:16:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=66313 00:11:27.918 14:16:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:27.918 14:16:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=66314 00:11:27.918 14:16:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:27.918 14:16:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:32.108 Initializing NVMe Controllers 00:11:32.108 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:32.108 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:32.108 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:32.108 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:32.108 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:32.108 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:32.108 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:32.108 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:32.108 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:32.108 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:32.108 Initialization complete. Launching workers. 
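nvme_multi_secondary runs one longer-lived spdk_nvme_perf instance plus two secondaries against the same controllers; all three share -i 0, so they rendezvous through SPDK's multi-process shared memory, and the disjoint core masks (0x1, 0x2, 0x4) keep them on separate lcores. The shape of the run, as launched above (a sketch: the harness backgrounds each pid and waits on them):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  sudo $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # longest run, lcore 0
  sudo $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # lcore 1
  sudo $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # lcore 2
  wait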
00:11:32.108 ======================================================== 00:11:32.108 Latency(us) 00:11:32.108 Device Information : IOPS MiB/s Average min max 00:11:32.108 PCIE (0000:00:10.0) NSID 1 from core 2: 3116.78 12.17 5131.23 1161.87 14689.40 00:11:32.108 PCIE (0000:00:11.0) NSID 1 from core 2: 3116.78 12.17 5133.20 1132.92 14892.91 00:11:32.108 PCIE (0000:00:13.0) NSID 1 from core 2: 3116.78 12.17 5133.03 1147.87 14338.85 00:11:32.108 PCIE (0000:00:12.0) NSID 1 from core 2: 3116.78 12.17 5133.54 1160.55 15873.20 00:11:32.108 PCIE (0000:00:12.0) NSID 2 from core 2: 3116.78 12.17 5133.20 1133.63 14285.90 00:11:32.108 PCIE (0000:00:12.0) NSID 3 from core 2: 3116.78 12.17 5130.67 1149.98 14750.56 00:11:32.108 ======================================================== 00:11:32.108 Total : 18700.66 73.05 5132.48 1132.92 15873.20 00:11:32.108 00:11:32.108 14:16:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 66313 00:11:32.108 Initializing NVMe Controllers 00:11:32.108 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:32.108 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:32.108 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:32.108 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:32.108 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:32.108 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:32.108 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:32.108 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:32.108 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:32.108 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:32.108 Initialization complete. Launching workers. 00:11:32.108 ======================================================== 00:11:32.108 Latency(us) 00:11:32.108 Device Information : IOPS MiB/s Average min max 00:11:32.108 PCIE (0000:00:10.0) NSID 1 from core 1: 4653.95 18.18 3435.36 1839.43 7757.88 00:11:32.108 PCIE (0000:00:11.0) NSID 1 from core 1: 4653.95 18.18 3437.52 1972.84 7739.56 00:11:32.108 PCIE (0000:00:13.0) NSID 1 from core 1: 4653.95 18.18 3437.66 1987.63 7616.93 00:11:32.108 PCIE (0000:00:12.0) NSID 1 from core 1: 4653.95 18.18 3438.07 1882.70 7788.61 00:11:32.108 PCIE (0000:00:12.0) NSID 2 from core 1: 4653.95 18.18 3438.18 1717.47 7225.18 00:11:32.108 PCIE (0000:00:12.0) NSID 3 from core 1: 4653.95 18.18 3438.31 1872.50 7775.18 00:11:32.108 ======================================================== 00:11:32.108 Total : 27923.70 109.08 3437.52 1717.47 7788.61 00:11:32.108 00:11:33.483 Initializing NVMe Controllers 00:11:33.483 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:33.483 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:33.483 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:33.483 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:33.483 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:33.483 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:33.483 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:33.483 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:33.483 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:33.483 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:33.483 Initialization complete. Launching workers. 
00:11:33.483 ======================================================== 00:11:33.483 Latency(us) 00:11:33.483 Device Information : IOPS MiB/s Average min max 00:11:33.483 PCIE (0000:00:10.0) NSID 1 from core 0: 8123.58 31.73 1968.01 930.18 10166.01 00:11:33.483 PCIE (0000:00:11.0) NSID 1 from core 0: 8123.58 31.73 1969.08 959.59 9480.01 00:11:33.483 PCIE (0000:00:13.0) NSID 1 from core 0: 8123.58 31.73 1969.04 890.57 8527.32 00:11:33.483 PCIE (0000:00:12.0) NSID 1 from core 0: 8123.58 31.73 1969.01 840.54 8234.07 00:11:33.483 PCIE (0000:00:12.0) NSID 2 from core 0: 8123.58 31.73 1968.98 795.47 9816.02 00:11:33.483 PCIE (0000:00:12.0) NSID 3 from core 0: 8126.78 31.75 1968.15 748.18 10563.53 00:11:33.483 ======================================================== 00:11:33.483 Total : 48744.66 190.41 1968.71 748.18 10563.53 00:11:33.483 00:11:33.483 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 66314 00:11:33.483 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=66383 00:11:33.483 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:33.483 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=66384 00:11:33.483 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:33.484 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:36.767 Initializing NVMe Controllers 00:11:36.767 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:36.767 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:36.767 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:36.767 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:36.767 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:36.767 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:36.767 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:36.768 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:36.768 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:36.768 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:36.768 Initialization complete. Launching workers. 
00:11:36.768 ======================================================== 00:11:36.768 Latency(us) 00:11:36.768 Device Information : IOPS MiB/s Average min max 00:11:36.768 PCIE (0000:00:10.0) NSID 1 from core 0: 5155.30 20.14 3101.27 936.97 7242.21 00:11:36.768 PCIE (0000:00:11.0) NSID 1 from core 0: 5155.30 20.14 3103.23 950.63 7267.00 00:11:36.768 PCIE (0000:00:13.0) NSID 1 from core 0: 5155.30 20.14 3103.82 951.45 7204.43 00:11:36.768 PCIE (0000:00:12.0) NSID 1 from core 0: 5155.30 20.14 3104.31 950.12 7775.05 00:11:36.768 PCIE (0000:00:12.0) NSID 2 from core 0: 5155.30 20.14 3104.55 963.24 6992.59 00:11:36.768 PCIE (0000:00:12.0) NSID 3 from core 0: 5160.63 20.16 3101.48 957.93 7097.74 00:11:36.768 ======================================================== 00:11:36.768 Total : 30937.13 120.85 3103.11 936.97 7775.05 00:11:36.768 00:11:36.768 Initializing NVMe Controllers 00:11:36.768 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:36.768 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:36.768 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:36.768 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:36.768 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:36.768 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:36.768 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:36.768 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:36.768 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:36.768 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:36.768 Initialization complete. Launching workers. 00:11:36.768 ======================================================== 00:11:36.768 Latency(us) 00:11:36.768 Device Information : IOPS MiB/s Average min max 00:11:36.768 PCIE (0000:00:10.0) NSID 1 from core 1: 4872.67 19.03 3281.01 1009.14 8988.53 00:11:36.768 PCIE (0000:00:11.0) NSID 1 from core 1: 4872.67 19.03 3282.73 1020.87 9002.68 00:11:36.768 PCIE (0000:00:13.0) NSID 1 from core 1: 4872.67 19.03 3282.62 1013.26 9203.06 00:11:36.768 PCIE (0000:00:12.0) NSID 1 from core 1: 4872.67 19.03 3282.50 1021.74 9230.95 00:11:36.768 PCIE (0000:00:12.0) NSID 2 from core 1: 4872.67 19.03 3282.42 1001.23 10029.20 00:11:36.768 PCIE (0000:00:12.0) NSID 3 from core 1: 4872.67 19.03 3282.32 830.76 9005.37 00:11:36.768 ======================================================== 00:11:36.768 Total : 29236.00 114.20 3282.27 830.76 10029.20 00:11:36.768 00:11:39.306 Initializing NVMe Controllers 00:11:39.306 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:39.306 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:39.306 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:39.306 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:39.306 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:39.306 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:39.306 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:39.306 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:39.306 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:39.306 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:39.306 Initialization complete. Launching workers. 
00:11:39.306 ======================================================== 00:11:39.306 Latency(us) 00:11:39.306 Device Information : IOPS MiB/s Average min max 00:11:39.306 PCIE (0000:00:10.0) NSID 1 from core 2: 3320.51 12.97 4817.32 1103.17 12300.41 00:11:39.306 PCIE (0000:00:11.0) NSID 1 from core 2: 3320.51 12.97 4818.43 1051.81 11489.82 00:11:39.306 PCIE (0000:00:13.0) NSID 1 from core 2: 3320.51 12.97 4818.12 1158.91 11557.99 00:11:39.306 PCIE (0000:00:12.0) NSID 1 from core 2: 3320.51 12.97 4818.05 1138.26 11650.17 00:11:39.306 PCIE (0000:00:12.0) NSID 2 from core 2: 3320.51 12.97 4818.23 1143.02 12238.91 00:11:39.306 PCIE (0000:00:12.0) NSID 3 from core 2: 3320.51 12.97 4817.91 1141.22 12224.22 00:11:39.306 ======================================================== 00:11:39.306 Total : 19923.03 77.82 4818.01 1051.81 12300.41 00:11:39.306 00:11:39.306 ************************************ 00:11:39.306 END TEST nvme_multi_secondary 00:11:39.306 ************************************ 00:11:39.306 14:17:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 66383 00:11:39.306 14:17:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 66384 00:11:39.306 00:11:39.306 real 0m11.037s 00:11:39.306 user 0m18.593s 00:11:39.306 sys 0m1.140s 00:11:39.306 14:17:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.306 14:17:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:39.306 14:17:03 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:39.306 14:17:03 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:39.306 14:17:03 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/65318 ]] 00:11:39.306 14:17:03 nvme -- common/autotest_common.sh@1094 -- # kill 65318 00:11:39.306 14:17:03 nvme -- common/autotest_common.sh@1095 -- # wait 65318 00:11:39.306 [2024-12-10 14:17:03.797769] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.797950] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.798033] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.798095] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.805155] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.805587] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.805656] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.805706] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.810581] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 
00:11:39.306 [2024-12-10 14:17:03.810656] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.810710] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.810743] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.815355] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.815430] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.815451] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.815473] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66256) is not found. Dropping the request. 00:11:39.306 [2024-12-10 14:17:03.980421] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:11:39.306 14:17:03 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:11:39.306 14:17:04 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:11:39.306 14:17:04 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:39.306 14:17:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.306 14:17:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.306 14:17:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.306 ************************************ 00:11:39.306 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:39.306 ************************************ 00:11:39.306 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:39.566 * Looking for test storage... 
00:11:39.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:39.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.566 --rc genhtml_branch_coverage=1 00:11:39.566 --rc genhtml_function_coverage=1 00:11:39.566 --rc genhtml_legend=1 00:11:39.566 --rc geninfo_all_blocks=1 00:11:39.566 --rc geninfo_unexecuted_blocks=1 00:11:39.566 00:11:39.566 ' 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:39.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.566 --rc genhtml_branch_coverage=1 00:11:39.566 --rc genhtml_function_coverage=1 00:11:39.566 --rc genhtml_legend=1 00:11:39.566 --rc geninfo_all_blocks=1 00:11:39.566 --rc geninfo_unexecuted_blocks=1 00:11:39.566 00:11:39.566 ' 00:11:39.566 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:39.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.566 --rc genhtml_branch_coverage=1 00:11:39.566 --rc genhtml_function_coverage=1 00:11:39.566 --rc genhtml_legend=1 00:11:39.566 --rc geninfo_all_blocks=1 00:11:39.566 --rc geninfo_unexecuted_blocks=1 00:11:39.566 00:11:39.566 ' 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:39.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.567 --rc genhtml_branch_coverage=1 00:11:39.567 --rc genhtml_function_coverage=1 00:11:39.567 --rc genhtml_legend=1 00:11:39.567 --rc geninfo_all_blocks=1 00:11:39.567 --rc geninfo_unexecuted_blocks=1 00:11:39.567 00:11:39.567 ' 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:39.567 
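The parameters above arm a 15-second (15000000 us) stuck admin command while budgeting only test_timeout=5 seconds for the reset path. The pass criteria applied at the end of the test (the @75/@79 checks traced further below, with the injected SCT/SC values set just after this point) reduce to:

    # The completion must carry exactly the injected error...
    (( nvme_status_sc == err_injection_sc ))    # SC  == 1
    (( nvme_status_sct == err_injection_sct ))  # SCT == 0
    # ...and the reset must not have waited out the injection timeout.
    (( diff_time <= test_timeout ))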
14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66551 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66551 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 66551 ']' 00:11:39.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.567 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:39.826 [2024-12-10 14:17:04.494824] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:11:39.826 [2024-12-10 14:17:04.495469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66551 ] 00:11:40.086 [2024-12-10 14:17:04.692641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.086 [2024-12-10 14:17:04.831134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.086 [2024-12-10 14:17:04.831307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.086 [2024-12-10 14:17:04.831461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.086 [2024-12-10 14:17:04.831498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.024 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.024 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:11:41.024 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:41.024 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.024 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:41.283 nvme0n1 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_9WiJV.txt 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:41.283 true 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733840225 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66574 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:41.283 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:41.283 
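The held command is a GET FEATURES (admin opcode 0x0a, i.e. --opc 10, with cdw10=7 for Number of Queues), which is visible again in the abort trace below when the reset completes it manually. A condensed sketch of the sequence traced above, where $cmd stands in for the base64-encoded 64-byte submission entry shown in the trace and $tmp_file for the mktemp output file:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # One-shot injection: hold the next matching admin command for up to 15 s,
    # then complete it with SCT=0 / SC=1.
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" > "$tmp_file" &
    get_feat_pid=$!
    sleep 2                                   # let the command get stuck
    "$rpc" bdev_nvme_reset_controller nvme0   # the reset must abort it promptly
    wait "$get_feat_pid"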
14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:43.189 [2024-12-10 14:17:07.941636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:43.189 [2024-12-10 14:17:07.942138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:43.189 [2024-12-10 14:17:07.942276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:43.189 [2024-12-10 14:17:07.942388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.189 [2024-12-10 14:17:07.944510] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66574 00:11:43.189 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66574 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66574 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:43.189 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_9WiJV.txt 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_9WiJV.txt 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66551 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 66551 ']' 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 66551 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66551 00:11:43.449 killing process with pid 66551 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66551' 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 66551 00:11:43.449 14:17:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 66551 00:11:46.033 14:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:46.033 14:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:46.033 00:11:46.033 real 0m6.687s 00:11:46.033 user 0m23.105s 00:11:46.033 sys 0m0.985s 00:11:46.033 ************************************ 00:11:46.033 END TEST bdev_nvme_reset_stuck_adm_cmd 
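The status decode above takes the base64-encoded 16-byte completion entry from the RPC's .cpl output, extracts the status halfword, and masks out the SC and SCT fields. A sketch consistent with the trace; the hexdump pipeline and the final (shift, mask) pairs are taken verbatim from the trace, while the byte-14/15 offset is an assumption based on the NVMe CQE layout that happens to reproduce the traced "status=2":

    base64_decode_bits() {    # usage: base64_decode_bits <b64-cpl> <shift> <mask>
        local bin_array status
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # Assumption: the status halfword lives in bytes 14-15 of the entry
        # (0x0002 here, matching "status=2" in the trace above).
        status=$((bin_array[14] | (bin_array[15] << 8)))
        printf '0x%x' $(((status >> $2) & $3))
    }
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1 (SC)
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0 (SCT)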
00:11:46.033 ************************************ 00:11:46.033 14:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.033 14:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:46.033 14:17:10 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:46.033 14:17:10 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:46.033 14:17:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:46.033 14:17:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.033 14:17:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.033 ************************************ 00:11:46.033 START TEST nvme_fio 00:11:46.033 ************************************ 00:11:46.033 14:17:10 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:11:46.033 14:17:10 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:46.033 14:17:10 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:46.033 14:17:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:46.033 14:17:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:46.033 14:17:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:11:46.033 14:17:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:46.033 14:17:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:46.033 14:17:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:46.293 14:17:10 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:46.293 14:17:10 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:46.293 14:17:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:46.293 14:17:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:46.293 14:17:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:46.293 14:17:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:46.293 14:17:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:46.553 14:17:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:46.553 14:17:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:46.813 14:17:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:46.813 14:17:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:46.813 14:17:11 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:46.813 14:17:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:47.072 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:47.072 fio-3.35 00:11:47.072 Starting 1 thread 00:11:51.260 00:11:51.260 test: (groupid=0, jobs=1): err= 0: pid=66730: Tue Dec 10 14:17:15 2024 00:11:51.260 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec) 00:11:51.260 slat (usec): min=3, max=154, avg= 4.64, stdev= 1.47 00:11:51.260 clat (usec): min=176, max=13347, avg=2821.81, stdev=558.27 00:11:51.260 lat (usec): min=180, max=13501, avg=2826.45, stdev=559.04 00:11:51.260 clat percentiles (usec): 00:11:51.260 | 1.00th=[ 2442], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2638], 00:11:51.260 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:11:51.260 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2933], 95.00th=[ 3195], 00:11:51.260 | 99.00th=[ 5604], 99.50th=[ 6521], 99.90th=[ 8848], 99.95th=[10683], 00:11:51.260 | 99.99th=[12911] 00:11:51.260 bw ( KiB/s): min=87536, max=91912, per=99.41%, avg=89893.33, stdev=2207.57, samples=3 00:11:51.260 iops : min=21884, max=22978, avg=22473.33, stdev=551.89, samples=3 00:11:51.260 write: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets 00:11:51.260 slat (nsec): min=3866, max=48600, avg=5108.98, stdev=1282.59 00:11:51.260 clat (usec): min=168, max=13162, avg=2829.96, stdev=552.06 00:11:51.260 lat (usec): min=172, max=13211, avg=2835.07, stdev=552.68 00:11:51.260 clat percentiles (usec): 00:11:51.260 | 1.00th=[ 2442], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2638], 00:11:51.260 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:11:51.260 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2966], 95.00th=[ 3195], 00:11:51.260 | 99.00th=[ 5538], 99.50th=[ 6521], 99.90th=[ 9110], 99.95th=[10814], 00:11:51.260 | 99.99th=[12649] 00:11:51.260 bw ( KiB/s): min=87200, max=91624, per=100.00%, avg=90106.67, stdev=2518.06, samples=3 00:11:51.260 iops : min=21800, max=22906, avg=22526.67, stdev=629.52, samples=3 00:11:51.260 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:51.260 lat (msec) : 2=0.13%, 4=97.00%, 10=2.76%, 20=0.07% 00:11:51.260 cpu : usr=99.25%, sys=0.10%, ctx=2, 
majf=0, minf=609 00:11:51.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:51.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.260 issued rwts: total=45236,44991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.260 00:11:51.260 Run status group 0 (all jobs): 00:11:51.260 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec 00:11:51.260 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:11:51.260 ----------------------------------------------------- 00:11:51.260 Suppressions used: 00:11:51.260 count bytes template 00:11:51.260 1 32 /usr/src/fio/parse.c 00:11:51.260 1 8 libtcmalloc_minimal.so 00:11:51.260 ----------------------------------------------------- 00:11:51.260 00:11:51.260 14:17:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:51.260 14:17:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:51.260 14:17:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:51.260 14:17:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:51.260 14:17:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:51.260 14:17:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:51.520 14:17:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:51.520 14:17:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:51.520 14:17:16 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:51.520 14:17:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:51.779 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:51.779 fio-3.35 00:11:51.779 Starting 1 thread 00:11:55.970 00:11:55.971 test: (groupid=0, jobs=1): err= 0: pid=66796: Tue Dec 10 14:17:20 2024 00:11:55.971 read: IOPS=23.3k, BW=91.0MiB/s (95.4MB/s)(182MiB/2001msec) 00:11:55.971 slat (nsec): min=3656, max=97271, avg=4506.12, stdev=1261.82 00:11:55.971 clat (usec): min=240, max=13189, avg=2732.21, stdev=396.46 00:11:55.971 lat (usec): min=244, max=13286, avg=2736.71, stdev=397.06 00:11:55.971 clat percentiles (usec): 00:11:55.971 | 1.00th=[ 2409], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2573], 00:11:55.971 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:11:55.971 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2966], 00:11:55.971 | 99.00th=[ 4424], 99.50th=[ 5407], 99.90th=[ 6718], 99.95th=[ 9896], 00:11:55.971 | 99.99th=[12911] 00:11:55.971 bw ( KiB/s): min=88464, max=95560, per=99.63%, avg=92800.00, stdev=3801.46, samples=3 00:11:55.971 iops : min=22116, max=23890, avg=23200.00, stdev=950.37, samples=3 00:11:55.971 write: IOPS=23.1k, BW=90.4MiB/s (94.7MB/s)(181MiB/2001msec); 0 zone resets 00:11:55.971 slat (nsec): min=3841, max=45477, avg=5018.16, stdev=1149.21 00:11:55.971 clat (usec): min=233, max=13066, avg=2747.79, stdev=410.73 00:11:55.971 lat (usec): min=237, max=13091, avg=2752.81, stdev=411.24 00:11:55.971 clat percentiles (usec): 00:11:55.971 | 1.00th=[ 2409], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:11:55.971 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:11:55.971 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2900], 95.00th=[ 2999], 00:11:55.971 | 99.00th=[ 4490], 99.50th=[ 5407], 99.90th=[ 7570], 99.95th=[10421], 00:11:55.971 | 99.99th=[12518] 00:11:55.971 bw ( KiB/s): min=87840, max=96984, per=100.00%, avg=92874.67, stdev=4641.70, samples=3 00:11:55.971 iops : min=21960, max=24246, avg=23218.67, stdev=1160.42, samples=3 00:11:55.971 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:55.971 lat (msec) : 2=0.19%, 4=98.42%, 10=1.29%, 20=0.05% 00:11:55.971 cpu : usr=99.30%, sys=0.05%, ctx=3, majf=0, minf=608 00:11:55.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:55.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.971 issued rwts: total=46596,46283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.971 00:11:55.971 Run status group 0 (all jobs): 00:11:55.971 READ: bw=91.0MiB/s (95.4MB/s), 91.0MiB/s-91.0MiB/s (95.4MB/s-95.4MB/s), io=182MiB (191MB), run=2001-2001msec 00:11:55.971 WRITE: bw=90.4MiB/s (94.7MB/s), 90.4MiB/s-90.4MiB/s (94.7MB/s-94.7MB/s), io=181MiB (190MB), run=2001-2001msec 00:11:55.971 ----------------------------------------------------- 00:11:55.971 Suppressions used: 00:11:55.971 count bytes template 00:11:55.971 1 32 /usr/src/fio/parse.c 00:11:55.971 1 8 libtcmalloc_minimal.so 00:11:55.971 ----------------------------------------------------- 00:11:55.971 
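Each of these fio runs is assembled the same way by the wrapper traced above: it ldd's the SPDK fio plugin, finds the ASan runtime it links against, and preloads ASan ahead of the plugin (ASan must come first in LD_PRELOAD so its allocator interposes before the plugin loads). Note also that colons in the PCI address are written as dots, because fio reserves ':' as a filename separator. Spelled out for the next target:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096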
00:11:55.971 14:17:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:55.971 14:17:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:55.971 14:17:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:55.971 14:17:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:56.230 14:17:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:56.230 14:17:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:56.489 14:17:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:56.489 14:17:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:56.489 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:56.748 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:56.748 fio-3.35 00:11:56.748 Starting 1 thread 00:12:00.035 00:12:00.035 test: (groupid=0, jobs=1): err= 0: pid=66862: Tue Dec 10 14:17:24 2024 00:12:00.035 read: IOPS=22.2k, BW=86.8MiB/s (91.0MB/s)(174MiB/2001msec) 00:12:00.035 slat (usec): min=4, max=129, avg= 4.91, stdev= 1.19 00:12:00.035 clat (usec): min=197, max=12127, avg=2870.36, stdev=400.43 00:12:00.035 lat (usec): min=203, max=12257, avg=2875.27, stdev=400.85 00:12:00.035 clat percentiles (usec): 00:12:00.035 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2769], 00:12:00.035 | 30.00th=[ 2802], 
40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:12:00.035 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 3032], 00:12:00.035 | 99.00th=[ 3752], 99.50th=[ 5211], 99.90th=[10290], 99.95th=[10945], 00:12:00.035 | 99.99th=[11863] 00:12:00.035 bw ( KiB/s): min=85344, max=90824, per=99.59%, avg=88541.33, stdev=2852.20, samples=3 00:12:00.035 iops : min=21336, max=22706, avg=22135.33, stdev=713.05, samples=3 00:12:00.035 write: IOPS=22.1k, BW=86.2MiB/s (90.4MB/s)(172MiB/2001msec); 0 zone resets 00:12:00.035 slat (nsec): min=4277, max=55974, avg=5079.87, stdev=1092.06 00:12:00.035 clat (usec): min=286, max=11901, avg=2881.08, stdev=450.06 00:12:00.035 lat (usec): min=291, max=11925, avg=2886.16, stdev=450.40 00:12:00.035 clat percentiles (usec): 00:12:00.035 | 1.00th=[ 2638], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:12:00.035 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:12:00.035 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 3064], 00:12:00.035 | 99.00th=[ 4178], 99.50th=[ 5342], 99.90th=[10683], 99.95th=[11207], 00:12:00.035 | 99.99th=[11731] 00:12:00.035 bw ( KiB/s): min=85080, max=91648, per=100.00%, avg=88704.00, stdev=3336.38, samples=3 00:12:00.035 iops : min=21270, max=22912, avg=22176.00, stdev=834.10, samples=3 00:12:00.035 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:00.035 lat (msec) : 2=0.12%, 4=98.87%, 10=0.85%, 20=0.14% 00:12:00.035 cpu : usr=99.25%, sys=0.25%, ctx=5, majf=0, minf=608 00:12:00.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:00.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.035 issued rwts: total=44474,44157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.035 00:12:00.035 Run status group 0 (all jobs): 00:12:00.035 READ: bw=86.8MiB/s (91.0MB/s), 86.8MiB/s-86.8MiB/s (91.0MB/s-91.0MB/s), io=174MiB (182MB), run=2001-2001msec 00:12:00.035 WRITE: bw=86.2MiB/s (90.4MB/s), 86.2MiB/s-86.2MiB/s (90.4MB/s-90.4MB/s), io=172MiB (181MB), run=2001-2001msec 00:12:00.035 ----------------------------------------------------- 00:12:00.035 Suppressions used: 00:12:00.035 count bytes template 00:12:00.035 1 32 /usr/src/fio/parse.c 00:12:00.035 1 8 libtcmalloc_minimal.so 00:12:00.035 ----------------------------------------------------- 00:12:00.035 00:12:00.035 14:17:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:00.035 14:17:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:00.035 14:17:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:00.035 14:17:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:00.295 14:17:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:00.295 14:17:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:00.554 14:17:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:00.554 14:17:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:00.554 14:17:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:00.813 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:00.813 fio-3.35 00:12:00.813 Starting 1 thread 00:12:06.085 00:12:06.085 test: (groupid=0, jobs=1): err= 0: pid=66923: Tue Dec 10 14:17:30 2024 00:12:06.085 read: IOPS=23.8k, BW=93.0MiB/s (97.5MB/s)(186MiB/2001msec) 00:12:06.085 slat (nsec): min=3591, max=83158, avg=4079.52, stdev=1020.41 00:12:06.085 clat (usec): min=243, max=11611, avg=2678.56, stdev=378.94 00:12:06.085 lat (usec): min=247, max=11694, avg=2682.64, stdev=379.35 00:12:06.085 clat percentiles (usec): 00:12:06.085 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2540], 00:12:06.085 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:12:06.085 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 3032], 00:12:06.085 | 99.00th=[ 4359], 99.50th=[ 5014], 99.90th=[ 6325], 99.95th=[ 8717], 00:12:06.085 | 99.99th=[11469] 00:12:06.085 bw ( KiB/s): min=91728, max=96128, per=98.93%, avg=94192.00, stdev=2247.02, samples=3 00:12:06.085 iops : min=22932, max=24032, avg=23548.00, stdev=561.75, samples=3 00:12:06.085 write: IOPS=23.6k, BW=92.4MiB/s (96.9MB/s)(185MiB/2001msec); 0 zone resets 00:12:06.085 slat (nsec): min=3754, max=42118, avg=4656.59, stdev=1057.00 00:12:06.085 clat (usec): min=212, max=11519, avg=2691.31, stdev=386.90 00:12:06.085 lat (usec): min=216, max=11541, avg=2695.97, stdev=387.31 00:12:06.085 clat percentiles (usec): 00:12:06.085 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:12:06.085 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:12:06.085 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2868], 95.00th=[ 3032], 
00:12:06.085 | 99.00th=[ 4424], 99.50th=[ 4948], 99.90th=[ 6587], 99.95th=[ 9110], 00:12:06.085 | 99.99th=[11207] 00:12:06.085 bw ( KiB/s): min=91408, max=96064, per=99.60%, avg=94216.00, stdev=2472.00, samples=3 00:12:06.085 iops : min=22852, max=24016, avg=23554.00, stdev=618.00, samples=3 00:12:06.085 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:12:06.085 lat (msec) : 2=0.51%, 4=98.04%, 10=1.38%, 20=0.03% 00:12:06.085 cpu : usr=99.40%, sys=0.05%, ctx=5, majf=0, minf=606 00:12:06.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:06.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.085 issued rwts: total=47628,47320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.085 00:12:06.085 Run status group 0 (all jobs): 00:12:06.085 READ: bw=93.0MiB/s (97.5MB/s), 93.0MiB/s-93.0MiB/s (97.5MB/s-97.5MB/s), io=186MiB (195MB), run=2001-2001msec 00:12:06.085 WRITE: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=185MiB (194MB), run=2001-2001msec 00:12:06.085 ----------------------------------------------------- 00:12:06.085 Suppressions used: 00:12:06.085 count bytes template 00:12:06.085 1 32 /usr/src/fio/parse.c 00:12:06.085 1 8 libtcmalloc_minimal.so 00:12:06.085 ----------------------------------------------------- 00:12:06.085 00:12:06.085 14:17:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:06.085 14:17:30 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:06.085 00:12:06.085 real 0m19.864s 00:12:06.085 user 0m15.126s 00:12:06.085 sys 0m4.828s 00:12:06.085 14:17:30 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.085 ************************************ 00:12:06.085 END TEST nvme_fio 00:12:06.085 ************************************ 00:12:06.085 14:17:30 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:06.085 00:12:06.085 real 1m35.787s 00:12:06.085 user 3m43.951s 00:12:06.085 sys 0m25.080s 00:12:06.085 14:17:30 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.085 ************************************ 00:12:06.085 END TEST nvme 00:12:06.085 ************************************ 00:12:06.085 14:17:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:06.085 14:17:30 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:12:06.085 14:17:30 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:06.085 14:17:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.085 14:17:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.085 14:17:30 -- common/autotest_common.sh@10 -- # set +x 00:12:06.085 ************************************ 00:12:06.085 START TEST nvme_scc 00:12:06.085 ************************************ 00:12:06.085 14:17:30 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:06.085 * Looking for test storage... 
00:12:06.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:06.085 14:17:30 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:06.345 14:17:30 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:06.345 14:17:30 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.345 14:17:30 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@345 -- # : 1 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.345 14:17:30 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.345 14:17:31 nvme_scc -- scripts/common.sh@368 -- # return 0 00:12:06.345 14:17:31 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.345 14:17:31 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.345 --rc genhtml_branch_coverage=1 00:12:06.345 --rc genhtml_function_coverage=1 00:12:06.345 --rc genhtml_legend=1 00:12:06.345 --rc geninfo_all_blocks=1 00:12:06.345 --rc geninfo_unexecuted_blocks=1 00:12:06.345 00:12:06.345 ' 00:12:06.345 14:17:31 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.345 --rc genhtml_branch_coverage=1 00:12:06.345 --rc genhtml_function_coverage=1 00:12:06.345 --rc genhtml_legend=1 00:12:06.345 --rc geninfo_all_blocks=1 00:12:06.345 --rc geninfo_unexecuted_blocks=1 00:12:06.345 00:12:06.345 ' 00:12:06.345 14:17:31 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:06.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.346 --rc genhtml_branch_coverage=1 00:12:06.346 --rc genhtml_function_coverage=1 00:12:06.346 --rc genhtml_legend=1 00:12:06.346 --rc geninfo_all_blocks=1 00:12:06.346 --rc geninfo_unexecuted_blocks=1 00:12:06.346 00:12:06.346 ' 00:12:06.346 14:17:31 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.346 --rc genhtml_branch_coverage=1 00:12:06.346 --rc genhtml_function_coverage=1 00:12:06.346 --rc genhtml_legend=1 00:12:06.346 --rc geninfo_all_blocks=1 00:12:06.346 --rc geninfo_unexecuted_blocks=1 00:12:06.346 00:12:06.346 ' 00:12:06.346 14:17:31 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.346 14:17:31 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.346 14:17:31 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.346 14:17:31 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.346 14:17:31 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.346 14:17:31 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.346 14:17:31 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.346 14:17:31 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.346 14:17:31 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:06.346 14:17:31 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:06.346 14:17:31 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:06.346 14:17:31 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:06.346 14:17:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:06.346 14:17:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:06.346 14:17:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:06.346 14:17:31 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:06.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:07.174 Waiting for block devices as requested 00:12:07.433 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.433 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.433 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.693 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.979 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:12.979 14:17:37 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:12.979 14:17:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:12.979 14:17:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:12.979 14:17:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:12.979 14:17:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
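From this point to the end of the controller dump, the trace is a single helper at work: nvme_get runs nvme-cli's id-ctrl against /dev/nvme0, splits each "field : value" output line on the colon with IFS=:, and stores the pair in a global associative array, so later checks can read nvme0[mdts], nvme0[oncs], and so on by name. A condensed sketch of that loop, with the whitespace handling simplified relative to the traced original:

    # Parse `nvme id-ctrl` into a global associative array keyed by field name.
    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip blank/banner lines
        reg=${reg//[[:space:]]/}               # keys arrive space-padded
        nvme0[$reg]=${val# }                   # the real helper evals instead,
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)  # as the array name is a parameter
    echo "${nvme0[vid]}"                       # -> 0x1b36 for this QEMU controller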
00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.979 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
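A few of the fields just captured decode usefully: frmw=0x3 and oacs=0x12a are the firmware-update and optional-admin-command capability masks, and mdts=7 is a power-of-two multiplier on the controller's minimum memory page size, so assuming the usual 4 KiB minimum page on this QEMU controller, the largest single transfer it accepts is 2^7 * 4 KiB = 512 KiB:

    mdts=7 mpsmin_bytes=4096                        # mdts from the trace; 4 KiB CAP.MPSMIN assumed
    echo "$(( (1 << mdts) * mpsmin_bytes )) bytes"  # -> 524288 bytes = 512 KiB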
00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.980 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:12.981 14:17:37 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.981 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:12.982 14:17:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:12:12.982 
14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:12:12.982 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
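The ng0n1 identify data being collected here pins down the namespace geometry: nsze, ncap, and nuse are all 0x140000 blocks, flbas=0x4 selects LBA format 4, and the lbaf table dumped just below shows that format as lbads:12, i.e. 4096-byte blocks. That makes the namespace 0x140000 * 4096 bytes = 5 GiB, fully utilized:

    nsze=$((0x140000)) lbads=12              # values from the ng0n1 identify dump
    echo "$(( nsze * (1 << lbads) )) bytes"  # -> 5368709120 bytes = 5 GiB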
00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:12:12.983 14:17:37 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:12.983 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:12.984 14:17:37 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:12.984 14:17:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.984 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:12:12.985 14:17:37 nvme_scc -- scripts/common.sh@18 -- # local i
00:12:12.985 14:17:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:12:12.985 14:17:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:12.985 14:17:37 nvme_scc -- scripts/common.sh@27 -- # return 0
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
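Identification of controller nvme0 is complete at this point: its namespace map (nvme0_ns), the ctrls/nvmes/bdfs tables, and its PCI address 0000:00:11.0 are all cached, and the @47 loop advances to nvme1 at 0000:00:10.0 once pci_can_use in scripts/common.sh finds it on no block list. Condensed from the functions.sh@47-63 trace lines, the scan is essentially the sketch below; array and variable names are taken from the trace, while the PCI-address derivation, the global array declarations, and the helper internals are assumptions rather than the verbatim upstream source.

shopt -s extglob nullglob   # the @(...) namespace glob below needs extglob at parse time

scan_nvme_ctrls() {
    local ctrl ctrl_dev pci ns ns_dev
    declare -gA ctrls nvmes bdfs          # assumption: declared globally elsewhere
    declare -ga ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do                 # functions.sh@47
        [[ -e $ctrl ]] || continue                        # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: how @49 gets 0000:00:10.0
        pci_can_use "$pci" || continue                    # @50, scripts/common.sh allow/block gate
        ctrl_dev=${ctrl##*/}                              # @51: nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52: fills the nvme1[...] array below
        declare -ga "${ctrl_dev}_ns=()"                   # per-controller namespace map (global)
        unset -n _ctrl_ns
        local -n _ctrl_ns=${ctrl_dev}_ns                  # @53: re-point the nameref each pass
        # Each controller exposes a generic char node (ng1n1) and a block node (nvme1n1);
        # both match the glob and both get identified per namespace:
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54
            [[ -e $ns ]] || continue                      # @55
            ns_dev=${ns##*/}                              # @56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # @57
            _ctrl_ns[${ns##*n}]=$ns_dev                   # @58: keyed by namespace id
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                      # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61
        bdfs["$ctrl_dev"]=$pci                            # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63
    done
}

The nameref lets one loop body fill a differently named per-controller array (nvme0_ns, nvme1_ns, ...) without resorting to eval.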
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:12.985 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.985
14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:12.986 
14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:12.986 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:12.987 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.988 14:17:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:12.988 14:17:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.988 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
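The geometry just cached for ng1n1 decodes as follows: nsze, ncap, and nuse of 0x17a17a (1,548,666) are counts of logical blocks; nlbaf=7 is a zero-based count, so eight LBA formats (0 through 7) are defined; and flbas=0x7 selects format 7, whose descriptor at the end of this dump (lbaf7, ms:64 lbads:12) gives 4096-byte data blocks with 64 metadata bytes each. A hypothetical decoder, not part of functions.sh, showing that arithmetic against the cached arrays:

# Hypothetical helper (illustration only) for the arrays nvme_get fills above:
lba_geometry() {                     # e.g. lba_geometry ng1n1
    local -n _ns=$1
    local idx lbads
    idx=$((_ns[flbas] & 0xf))        # low nibble of flbas selects the active format
                                     # (the extended bits only matter past 16 formats)
    lbads=${_ns[lbaf$idx]#*lbads:}   # pull "lbads:N" out of "ms:M lbads:N rp:R"
    lbads=${lbads%% *}               # lbads is log2 of the data block size
    echo "$1: format $idx, $((1 << lbads))-byte blocks, $((_ns[nsze])) blocks total"
}                                    # ng1n1: format 7, 4096-byte blocks, 1548666 blocks total

nvme0n1 earlier in this dump decodes the same way: its flbas=0x4 picks lbaf4 (ms:0 lbads:12), the same 4096-byte block size but with no per-block metadata.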
00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:12:12.989 14:17:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:12.989 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 
14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
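The trace above shows nvme/functions.sh walking the output of nvme id-ns /dev/nvme1n1 one reg : val pair at a time and eval-ing each pair into a global associative array. A minimal sketch of that parsing pattern, assuming nvme-cli's human-readable "key : value" output format (the function name is illustrative, not the SPDK source):

    # Sketch of the reg:val loop visible in the trace: split each output
    # line on the first ':' and store the remainder under the squeezed key.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                   # same pattern as functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # "lbaf  0" -> "lbaf0"
            [[ -n $val ]] && eval "${ref}[$reg]=\"${val# }\""
        done < <(nvme "$cmd" "$dev")          # the job runs /usr/local/src/nvme-cli/nvme
    }

For the values captured here, nsze=0x17a17a with flbas=0x7 selecting LBA format 7 (shown above for ng1n1 as ms:64 lbads:12, i.e. 4096-byte data blocks plus 64 bytes of metadata) works out to 0x17a17a * 4096 = 6,343,335,936 bytes, roughly 5.9 GiB of formatted capacity.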
00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:12.990 
14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.990 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:12.991 14:17:37 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:12.991 14:17:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:12.991 14:17:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:12.991 14:17:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:12.991 14:17:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.991 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
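Two of the nvme2 id-ctrl fields captured above decode as follows; the bit layout is the standard NVMe field packing, and the 4 KiB minimum memory page size is an assumption (typical of this QEMU controller's CAP.MPSMIN, which the trace does not read out):

    ver=0x10400    # major/minor/tertiary version packed as 16/8/8 bits
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    # -> NVMe 1.4.0
    mdts=7         # power-of-two multiple of the minimum page size
    echo "$(( (1 << mdts) * 4096 / 1024 )) KiB max data transfer"   # -> 512 KiB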
00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
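The outer loop that reached nvme2 (functions.sh@47-52 above) walks /sys/class/nvme, identifies each controller, and then identifies each namespace node beneath it. A rough, self-contained sketch of that enumeration, assuming nvme-cli is on PATH (the loop body is illustrative; the traced glob additionally matches the ng${X}n* character nodes):

    # Walk sysfs the way the traced loop does, querying each controller
    # and then each nvmeXnY namespace node that exists under it.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        name=${ctrl##*/}
        nvme id-ctrl "/dev/$name" >/dev/null          # controller identify
        for ns in "$ctrl/$name"n*; do
            [[ -e $ns ]] || continue
            nvme id-ns "/dev/${ns##*/}" >/dev/null    # per-namespace identify
        done
    done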
00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:12.992 14:17:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:12.992 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
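The thermal thresholds just recorded (wctemp=343, cctemp=373) are expressed in Kelvin, the NVMe specification's convention for these id-ctrl fields:

    wctemp=343; cctemp=373
    echo "warning $((wctemp - 273))C, critical $((cctemp - 273))C"
    # -> warning 70C, critical 100C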
00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:12.993 14:17:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.993 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:12.994 
14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:12.994 
00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:12:12.994 14:17:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:12:13.269 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:13.269 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:13.269 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
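Every per-register value above comes out of one small loop: the trace shows nvme_get splitting each "reg : val" line of the `nvme id-ns` output on ":" (functions.sh@21) and eval-ing the pair into a caller-named associative array (functions.sh@23). A minimal re-sketch of that pattern, assuming `nvme` is nvme-cli on PATH; this is illustrative, not the verbatim source:

    #!/usr/bin/env bash
    # Re-sketch of the nvme_get pattern this trace exercises (functions.sh@16-23).
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. declare -gA ng2n1=()  (functions.sh@20)
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip headers and blank lines (functions.sh@22)
            reg=${reg//[[:space:]]/}        # "nsze   " -> "nsze"
            val=${val# }                    # drop the leading space after the colon
            eval "${ref}[${reg}]=\"\$val\"" # e.g. ng2n1[nsze]="0x100000"; eval mirrors @23
        done < <(nvme "$@")                 # e.g. nvme id-ns /dev/ng2n1
    }

    # Usage matching the trace: nvme_get ng2n1 id-ns /dev/ng2n1

The eval is what lets one helper populate differently named global arrays (ng2n1, ng2n2, nvme2n1, ...) from the same parse loop.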
00:12:13.269 14:17:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:12:13.269 14:17:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:12:13.269 14:17:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:12:13.270 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:13.271 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:13.271 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
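All namespaces in this run report flbas=0x4 with lbaf4 marked "(in use)". By standard NVMe semantics, the low nibble of FLBAS indexes the active LBA format (valid here, since nlbaf=7 means only eight formats), and lbads in each lbafN string is the log2 of the data-block size. A short illustrative decode:

    #!/usr/bin/env bash
    # Decode of the active LBA format from the fields parsed above:
    # ms = metadata bytes per block, lbads = log2 data size, rp = relative perf.
    flbas=0x4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    fmt=$((flbas & 0xf))                                    # -> 4, i.e. lbaf4
    lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<<"$lbaf4")  # -> 12
    echo "lbaf${fmt}: $((1 << lbads))-byte data blocks, 0 metadata bytes"

So every namespace here is formatted with 4096-byte blocks and no per-block metadata.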
00:12:13.271 14:17:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:12:13.271 14:17:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:12:13.271 14:17:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:12:13.272 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:13.272 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:13.272 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
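Once the loop completes, the parsed arrays are reachable through the `_ctrl_ns` nameref (bound to nvme2_ns at functions.sh@53), keyed by namespace index. A hypothetical readback, with abridged array contents mirroring the trace:

    #!/usr/bin/env bash
    # Hypothetical readback of the structures this trace builds. nvme2_ns maps a
    # namespace index to the name of the associative array registered for it at
    # functions.sh@58; a later registration for the same index (e.g. nvme2n1
    # after ng2n1) simply overwrites the earlier entry. Contents abridged.
    declare -gA ng2n1=([nsze]=0x100000 [flbas]=0x4)
    declare -gA nvme2_ns=([1]=ng2n1)
    for idx in "${!nvme2_ns[@]}"; do
        declare -n ns=${nvme2_ns[$idx]}     # nameref into the per-namespace array
        echo "ns${idx} -> ${nvme2_ns[$idx]}: nsze=${ns[nsze]} flbas=${ns[flbas]}"
        unset -n ns                         # drop the nameref before re-pointing it
    done

That is why the loop below re-parses the same namespaces through their nvme2nN block nodes: both device flavors land in the same index slot of the map.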
00:12:13.272 14:17:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:12:13.272 14:17:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:12:13.272 14:17:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.273 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.274 14:17:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
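The trace above is the nvme_get helper at work: each line of nvme-cli output is split on ':' into a reg/val pair (functions.sh@21) and eval'd into a global associative array named after the device (functions.sh@23). A minimal sketch of that pattern, for illustration only; nvme_get_sketch is a hypothetical name, and the real nvme/functions.sh additionally handles the shift for multi-word refs and the ng* character devices:

# nvme_get_sketch NAME CMD...: load "field : value" output into a
# global associative array NAME (a simplified reconstruction).
nvme_get_sketch() {
    local ref=$1; shift                 # array name, then the query command
    local reg val
    local -gA "$ref=()"                 # e.g. declare -gA nvme2n1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # "lbaf  4 " -> "lbaf4"
        [[ -n $reg && -n $val ]] || continue   # skip banner/empty lines
        val=${val# }                    # trim the pad after ':'
        eval "${ref}[\$reg]=\$val"      # nvme2n1[nsze]=0x100000, ...
    done < <("$@")
}

# Usage against the namespace seen in the log (device path assumed):
# nvme_get_sketch nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
# echo "nsze=${nvme2n1[nsze]} flbas=${nvme2n1[flbas]}"

Because read is given two variables, any further colons in the value (the "ms:0 lbads:9 rp:0" format rows) land intact in val, which is why the lbaf entries survive the split.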
00:12:13.274 14:17:37 nvme_scc -- nvme2n2 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:12:13.275 14:17:37 nvme_scc -- nvme2n2 id-ns: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:12:13.275 14:17:37 nvme_scc -- nvme2n2 id-ns: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:13.275 14:17:37 nvme_scc -- nvme2n2 id-ns: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:13.275 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:12:13.275 14:17:37 nvme_scc -- nvme/functions.sh@54-57 -- # next namespace: /sys/class/nvme/nvme2/nvme2n3 exists; ns_dev=nvme2n3; nvme_get nvme2n3 id-ns /dev/nvme2n3
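For reference, the lbaf rows repeated for every namespace here decode as follows: the low four bits of flbas select the in-use format index, and lbads is log2 of the logical block size, so flbas=0x4 selects lbaf4 ('ms:0 lbads:12 rp:0 (in use)'), i.e. 4096-byte blocks with no separate metadata. A quick shell check:

flbas=0x4
printf 'in-use lbaf: %d\n' $(( flbas & 0xf ))        # -> 4, the "(in use)" row
lbads=12                                             # from lbaf4: "ms:0 lbads:12 rp:0"
printf 'block size: %d bytes\n' $(( 1 << lbads ))    # -> 4096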
00:12:13.275 14:17:37 nvme_scc -- nvme2n3 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:12:13.276 14:17:37 nvme_scc -- nvme2n3 id-ns: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:12:13.276 14:17:37 nvme_scc -- nvme2n3 id-ns: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:13.277 14:17:37 nvme_scc -- nvme2n3 id-ns: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:13.277 14:17:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:12:13.277 14:17:37 nvme_scc -- nvme/functions.sh@60-61 -- # ctrls["$ctrl_dev"]=nvme2; nvmes["$ctrl_dev"]=nvme2_ns
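Everything from functions.sh@47 onward is the outer enumeration loop: each controller under /sys/class/nvme is checked against the allowed PCI list, id-ctrl'd, its namespaces id-ns'd, and the results recorded in the ctrls/nvmes/bdfs/ordered_ctrls maps seen above. A simplified reconstruction, building on the nvme_get_sketch helper above; pci_can_use() filtering and ng* device handling are omitted, and the readlink-based bdf lookup assumes PCIe-attached controllers:

shopt -s nullglob
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # nvme2, nvme3, ...
    bdf=$(basename "$(readlink -f "$ctrl/device")")   # 0000:00:12.0, ...
    nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
    for ns in "$ctrl/${ctrl_dev}n"*; do
        ns_dev=${ns##*/}                              # nvme2n1, nvme2n2, ...
        nvme_get_sketch "$ns_dev" nvme id-ns "/dev/$ns_dev"
    done
    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of the per-ctrl ns map
    bdfs[$ctrl_dev]=$bdf
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index 2 -> nvme2
done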
00:12:13.277 14:17:37 nvme_scc -- nvme/functions.sh@62-63 -- # bdfs["$ctrl_dev"]=0000:00:12.0; ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:12:13.277 14:17:37 nvme_scc -- nvme/functions.sh@47-52 -- # next controller: /sys/class/nvme/nvme3 exists; pci=0000:00:13.0; pci_can_use 0000:00:13.0 -> allowed (scripts/common.sh@27 return 0); ctrl_dev=nvme3; nvme_get nvme3 id-ctrl /dev/nvme3
00:12:13.277 14:17:37 nvme_scc -- nvme3 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400
00:12:13.278 14:17:37 nvme_scc -- nvme3 id-ctrl: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1
00:12:13.278 14:17:38 nvme_scc -- nvme3 id-ctrl: fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:12:13.278 14:17:38 nvme_scc -- nvme3 id-ctrl: wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0
00:12:13.278 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.278 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:13.278 14:17:38 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:13.278 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.278 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.278 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.278 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:13.278 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 
14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:13.279 
14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.279 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.280 14:17:38 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:13.280 14:17:38 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:13.280 14:17:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
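Note on the trace above: nvme/functions.sh is caching `nvme id-ctrl` output for nvme3 into a bash associative array. Each output line is split on the first ':' with `IFS=: read -r reg val` (functions.sh@21), non-empty values are stored via eval (functions.sh@22-23), and the controller is then registered in the global ctrls/nvmes/bdfs maps (functions.sh@60-62). A minimal standalone sketch of that parse loop, simplified from the traced logic; the whitespace trimming and the plain `nvme3[$reg]=$val` assignment are illustrative assumptions, not SPDK's exact code:

    #!/usr/bin/env bash
    # Cache `nvme id-ctrl /dev/nvme3` as an associative array, one register
    # per line, e.g. "mdts : 7" -> nvme3[mdts]=7, mirroring the trace above.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # keys are padded in id-ctrl output
        val=${val#"${val%%[![:space:]]*}"}      # trim leading space from the value
        [[ -n $val ]] && nvme3[$reg]=$val       # skip blank/header lines, as @22 does
    done < <(nvme id-ctrl /dev/nvme3)
    printf 'oncs=%s mdts=%s ver=%s\n' "${nvme3[oncs]}" "${nvme3[mdts]}" "${nvme3[ver]}"

Because `val` is the last variable passed to `read`, fields that themselves contain ':' survive intact, which is why the trace stores the whole power-state line as one value: nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'.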
00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:12:13.280 14:17:38 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:12:13.280 14:17:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:12:13.280 14:17:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:12:13.280 14:17:38 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:14.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:14.805 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.805 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.805 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:15.064 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:15.064 14:17:39 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:15.064 14:17:39 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:15.064 14:17:39 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.064 14:17:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:15.064 ************************************ 00:12:15.064 START TEST nvme_simple_copy 00:12:15.064 ************************************ 00:12:15.064 14:17:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:15.324 Initializing NVMe Controllers 00:12:15.324 Attaching to 0000:00:10.0 00:12:15.324 Controller supports SCC. Attached to 0000:00:10.0 00:12:15.324 Namespace ID: 1 size: 6GB 00:12:15.324 Initialization complete. 
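The controller selection just traced comes down to one bit test: ONCS (Optional NVM Command Support, from Identify Controller) advertises the Copy command, i.e. Simple Copy (SCC), in bit 8. Every controller here reports oncs=0x15d, so all four pass ctrl_has_scc and the first one echoed (nvme1 at 0000:00:10.0) is used. A hedged sketch of the check the trace performs with `(( oncs & 1 << 8 ))`; the array literal just restates values from the log:

    #!/usr/bin/env bash
    # ONCS bit 8 = Copy (Simple Copy) support. 0x15d = 1 0101 1101 binary,
    # so bit 8 is set and the test below succeeds for every controller.
    declare -A oncs_by_ctrl=([nvme0]=0x15d [nvme1]=0x15d [nvme2]=0x15d [nvme3]=0x15d)
    for ctrl in "${!oncs_by_ctrl[@]}"; do
        if (( oncs_by_ctrl[$ctrl] & 1 << 8 )); then
            echo "$ctrl supports Simple Copy"
        fi
    done

The simple_copy binary that runs next exercises exactly that command: it writes LBAs 0 to 63 with random data, issues a copy to destination LBA 256, and reads the destination back for comparison, hence the "LBAs matching Written Data: 64" success line in the output that follows.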
00:12:15.324 00:12:15.324 Controller QEMU NVMe Ctrl (12340 ) 00:12:15.324 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:12:15.324 Namespace Block Size:4096 00:12:15.324 Writing LBAs 0 to 63 with Random Data 00:12:15.324 Copied LBAs from 0 - 63 to the Destination LBA 256 00:12:15.324 LBAs matching Written Data: 64 00:12:15.324 00:12:15.324 real 0m0.340s 00:12:15.324 user 0m0.125s 00:12:15.324 sys 0m0.113s 00:12:15.324 14:17:40 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.324 ************************************ 00:12:15.324 END TEST nvme_simple_copy 00:12:15.324 ************************************ 00:12:15.324 14:17:40 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 ************************************ 00:12:15.584 END TEST nvme_scc 00:12:15.584 ************************************ 00:12:15.584 00:12:15.584 real 0m9.442s 00:12:15.584 user 0m1.712s 00:12:15.584 sys 0m2.724s 00:12:15.584 14:17:40 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.584 14:17:40 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 14:17:40 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:12:15.584 14:17:40 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:12:15.584 14:17:40 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:12:15.584 14:17:40 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:12:15.584 14:17:40 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:12:15.584 14:17:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.584 14:17:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.584 14:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 ************************************ 00:12:15.584 START TEST nvme_fdp 00:12:15.584 ************************************ 00:12:15.584 14:17:40 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:12:15.584 * Looking for test storage... 00:12:15.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:15.845 14:17:40 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:15.845 14:17:40 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:12:15.845 14:17:40 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:15.845 14:17:40 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:12:15.845 14:17:40 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.845 14:17:40 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.845 --rc genhtml_branch_coverage=1 00:12:15.845 --rc genhtml_function_coverage=1 00:12:15.845 --rc genhtml_legend=1 00:12:15.845 --rc geninfo_all_blocks=1 00:12:15.845 --rc geninfo_unexecuted_blocks=1 00:12:15.845 00:12:15.845 ' 00:12:15.845 14:17:40 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.845 --rc genhtml_branch_coverage=1 00:12:15.845 --rc genhtml_function_coverage=1 00:12:15.845 --rc genhtml_legend=1 00:12:15.845 --rc geninfo_all_blocks=1 00:12:15.845 --rc geninfo_unexecuted_blocks=1 00:12:15.845 00:12:15.845 ' 00:12:15.845 14:17:40 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:15.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.845 --rc genhtml_branch_coverage=1 00:12:15.845 --rc genhtml_function_coverage=1 00:12:15.845 --rc genhtml_legend=1 00:12:15.845 --rc geninfo_all_blocks=1 00:12:15.845 --rc geninfo_unexecuted_blocks=1 00:12:15.845 00:12:15.845 ' 00:12:15.845 14:17:40 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.845 --rc genhtml_branch_coverage=1 00:12:15.845 --rc genhtml_function_coverage=1 00:12:15.845 --rc genhtml_legend=1 00:12:15.845 --rc geninfo_all_blocks=1 00:12:15.845 --rc geninfo_unexecuted_blocks=1 00:12:15.845 00:12:15.845 ' 00:12:15.845 14:17:40 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.845 14:17:40 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.845 14:17:40 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.845 14:17:40 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.845 14:17:40 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.845 14:17:40 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:15.845 14:17:40 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:15.845 14:17:40 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:12:15.845 14:17:40 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:15.845 14:17:40 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:16.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:16.674 Waiting for block devices as requested 00:12:16.933 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:16.933 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:16.933 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.193 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:22.480 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:22.480 14:17:46 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:22.480 14:17:46 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:22.480 14:17:46 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:22.480 14:17:46 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:22.480 14:17:46 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:22.480 14:17:46 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:22.480 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:22.481 14:17:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:22.481 14:17:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.481 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:22.482 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 
14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.482 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:22.483 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:12:22.483 14:17:47 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:12:22.483 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.483 14:17:47 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:12:22.483 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
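The pattern repeating throughout this trace (functions.sh@21-23: set IFS=:, read -r reg val, then eval an assignment) is the nvme_get helper caching every "field : value" line of nvme-cli's identify output in a global associative array named after the device. A condensed, self-contained sketch of that idiom follows; it assumes nvme-cli's plain-text identify format, and the whitespace trimming here is illustrative rather than the verbatim functions.sh body:

  #!/usr/bin/env bash
  shopt -s extglob

  # nvme_get <array-name> <identify-command...>:
  # cache each "field : value" output line in a global associative array,
  # e.g. nvme_get ng0n1 nvme id-ns /dev/ng0n1  ->  ng0n1[nsze]=0x140000
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                # the trace's: local -gA 'ng0n1=()'
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}       # "nsze      " -> "nsze"
          val=${val##+([[:space:]])}     # drop the padding after the colon
          [[ -n $reg && -n $val ]] || continue
          eval "${ref}[\$reg]=\$val"     # assignment, so $val never word-splits
      done < <("$@")
  }

  # usage:
  #   nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
  #   echo "${nvme0[oacs]}"              # -> 0x12a

The eval is why each field appears twice in the @23 frames above: once as the quoted eval argument, once as the traced assignment it executes.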
00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:12:22.484 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
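In the lbaf entries just listed for ng0n1, ms is the metadata bytes per block, lbads is the log2 of the LBA data size (lbads:9 means 512-byte blocks, lbads:12 means 4096-byte), and rp is a relative-performance hint. flbas=0x4 selects format 4, the "(in use)" entry, so this namespace is formatted with 4 KiB blocks and no metadata; together with nsze that pins down the raw capacity, which is easy to sanity-check in shell arithmetic:

  # nsze blocks * 2^lbads bytes per block, with the values traced above
  printf '%d bytes\n' $(( 0x140000 * (1 << 12) ))   # 5368709120 = 5 GiB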
00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.484 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:22.485 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.485 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.486 14:17:47 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:22.486 14:17:47 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:22.486 14:17:47 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:22.486 14:17:47 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:22.486 14:17:47 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.486 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:22.487 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:22.487 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.488 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:12:22.489 14:17:47 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.489 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:12:22.490 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.490 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.491 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.491 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:22.491 14:17:47 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.491 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:22.492 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
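
The functions.sh@21-23 entries above repeat a single pattern for every register: split one 'reg : val' line of nvme-cli output on ':' (IFS=:), skip it if the value side is empty, then eval the pair into a global associative array named by the caller. A minimal sketch of that mechanism, reconstructed from the trace entries themselves (the real nvme/functions.sh carries more handling, and plain 'nvme' stands in for the /usr/local/src/nvme-cli/nvme path used here):

    nvme_get() {                              # e.g. nvme_get nvme1n1 id-ns /dev/nvme1n1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # functions.sh@20: declare the array at global scope
        while IFS=: read -r reg val; do       # functions.sh@21
            [[ -n $val ]] || continue         # functions.sh@22: header/blank lines carry no value
            reg=${reg//[[:space:]]/}          # 'lbaf  7 ' -> 'lbaf7' (exact trimming is assumed)
            val=${val# }                      # drop the space that follows ':'
            eval "${ref}[${reg}]=\"${val}\""  # functions.sh@23: nvme1n1[nsattr]="0"
        done < <(nvme "$@")                   # functions.sh@16 runs the nvme-cli binary
    }

Because the array is declared with local -gA (global scope from inside the function), entries such as nvme1n1[nsattr] stay queryable long after nvme_get returns, which is what the rest of the trace relies on.
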
00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:22.492 14:17:47 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:22.492 14:17:47 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:22.492 14:17:47 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:22.492 14:17:47 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:22.492 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
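
Back where the controller loop picked up /sys/class/nvme/nvme2, the scripts/common.sh@18-27 entries gated the device on its PCI address before any id-ctrl query ran. A hedged reconstruction consistent with those four trace lines; PCI_BLOCKED and PCI_ALLOWED are assumed names for the deny/allow lists, both empty in this run:

    pci_can_use() {                               # pci_can_use 0000:00:12.0
        local i                                   # common.sh@18 (used by the real implementation)
        [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1  # common.sh@21: '' never matches, so not blocked
        [[ -z ${PCI_ALLOWED:-} ]] && return 0     # common.sh@25: empty allow-list means usable
        [[ ${PCI_ALLOWED} =~ $1 ]]                # assumed symmetric check when a list is set
    }

That return 0 (common.sh@27) is what let the loop proceed to ctrl_dev=nvme2 and the nvme_get nvme2 id-ctrl /dev/nvme2 call whose output is being parsed here.
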
00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:22.493 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
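
A worked example of consuming these parsed fields: the nvme2[mdts]=7 recorded above is the Maximum Data Transfer Size exponent. Per the NVMe spec it expresses a power of two in units of the controller's minimum memory page size (CAP.MPSMIN; 4 KiB is assumed below, the usual value for a QEMU controller), and an mdts of 0 would mean no reported limit:

    mps=4096                                      # assumed CAP.MPSMIN page size
    max_xfer=$(( (1 << ${nvme2[mdts]}) * mps ))   # 2^7 * 4096
    echo "$max_xfer"                              # 524288 bytes, i.e. 512 KiB per transfer
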
00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.493 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:22.494 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.494 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
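
The functions.sh@53-58 entries just below switch from the controller to its namespaces: a nameref (local -n) aliases the per-controller map nvme2_ns, and an extglob pattern matches both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) under the controller's sysfs directory. A condensed sketch of that loop, reusing the nvme_get sketch above; the shopt line and the closing block-size derivation are illustrative additions:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -n _ctrl_ns=nvme2_ns                  # functions.sh@53 (local -n inside a function)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng2* or nvme2n*
        ns_dev=${ns##*/}                          # functions.sh@56: e.g. ng2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev               # functions.sh@58: index by namespace number
    done
    # The in-use LBA format is the low nibble of flbas (valid with <= 16 formats),
    # and its lbads field is log2 of the block size: 0x4 -> lbaf4 -> lbads:12.
    fmt=$(( ${ng2n1[flbas]} & 0xf ))
    lbads=${ng2n1[lbaf$fmt]#*lbads:}; lbads=${lbads%% *}
    echo $(( 1 << lbads ))                        # 4096-byte logical blocks

The same walk then repeats for ng2n2 and the nvme2n* block devices, which is why the identical register listing appears once per namespace node.
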
00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:12:22.495 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 
14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.496 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.761 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.761 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:12:22.762 14:17:47 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:12:22.762 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 
14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:12:22.763 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:12:22.764 
14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:12:22.764 14:17:47 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.764 14:17:47 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.764 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:22.765 14:17:47 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.765 
14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:22.765 14:17:47 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:22.765 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.766 
14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.766 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:22.767 14:17:47 nvme_fdp 
00:12:22.767 14:17:47 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme_get nvme2n2 id-ns (continued): remaining fields decoded into the nvme2n2 array:
00:12:22.767 14:17:47 nvme_fdp --     flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:12:22.767 14:17:47 nvme_fdp --     nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:12:22.767 14:17:47 nvme_fdp --     npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:12:22.767 14:17:47 nvme_fdp --     nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:12:22.768 14:17:47 nvme_fdp --     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:22.768 14:17:47 nvme_fdp --     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:12:22.768 14:17:47 nvme_fdp --     lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:22.768 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:12:22.768 14:17:47 nvme_fdp -- nvme/functions.sh@54-57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]: ns_dev=nvme2n3; nvme_get nvme2n3 id-ns /dev/nvme2n3 (via /usr/local/src/nvme-cli/nvme):
00:12:22.768 14:17:47 nvme_fdp --     nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
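What the condensed trace above is doing, mechanically: nvme_get runs nvme-cli against the device node and folds every populated "field : value" line of the output into a global bash associative array named after the device (the IFS=: / read -r / eval pattern visible in the raw xtrace). A minimal sketch of that pattern; the whitespace-trimming idiom and the exact function body are reconstructions, not the verbatim nvme/functions.sh:

  # Sketch only; mirrors the pattern the trace shows, details assumed.
  nvme_get() {
      local ref=$1 reg val
      shift                                   # remaining args, e.g.: id-ns /dev/nvme2n3
      local -gA "$ref=()"                     # global assoc array named after the device

      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue           # store populated fields only
          reg=${reg//[[:space:]]/}            # "nsze      " -> "nsze"
          val=${val#"${val%%[![:space:]]*}"}  # trim leading blanks from the value
          eval "${ref}[\$reg]=\$val"          # -> nvme2n3[nsze]=0x100000
      done < <(/usr/local/src/nvme-cli/nvme "$@")
  }

  # Usage as in the trace (requires the device node to exist):
  nvme_get nvme2n3 id-ns /dev/nvme2n3
  echo "nsze=${nvme2n3[nsze]} flbas=${nvme2n3[flbas]}"

Note that splitting on the first colon only is what lets multi-colon values like lbaf4='ms:0 lbads:12 rp:0 (in use)' survive intact, exactly as they appear in the decoded arrays above.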
00:12:22.768 14:17:47 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme_get nvme2n3 id-ns (continued):
00:12:22.768 14:17:47 nvme_fdp --     nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:12:22.768 14:17:47 nvme_fdp --     nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:12:22.769 14:17:47 nvme_fdp --     npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:12:22.769 14:17:47 nvme_fdp --     nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:12:22.769 14:17:47 nvme_fdp --     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:22.769 14:17:47 nvme_fdp --     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:12:22.769 14:17:47 nvme_fdp --     lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:22.769 14:17:47 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3
00:12:22.769 14:17:47 nvme_fdp -- nvme/functions.sh@60-63 -- # register controller: ctrls[nvme2]=nvme2 nvmes[nvme2]=nvme2_ns bdfs[nvme2]=0000:00:12.0 ordered_ctrls[2]=nvme2
00:12:22.769 14:17:47 nvme_fdp -- nvme/functions.sh@47-51 -- # next controller: /sys/class/nvme/nvme3 exists, pci=0000:00:13.0, pci_can_use 0000:00:13.0 -> return 0, ctrl_dev=nvme3
00:12:22.770 14:17:47 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 (via /usr/local/src/nvme-cli/nvme):
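At this point the trace has finished controller nvme2 and moved on to nvme3 at PCI 0000:00:13.0. The shape of the outer discovery loop, per functions.sh@47-63 as it appears in the trace; this is a rough sketch: the BDF derivation from the sysfs symlink is an assumption, and the real namespace glob also matches ng* character devices, elided here. nvme_get is the helper sketched earlier, pci_can_use comes from scripts/common.sh (PCI allow/deny lists):

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                              # e.g. nvme3
      pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0 (assumed derivation)
      pci_can_use "$pci" || continue                    # skip blocked/claimed devices

      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fill the nvme3 array

      declare -gA "${ctrl_dev}_ns=()"                   # per-controller namespace map
      declare -n _ctrl_ns=${ctrl_dev}_ns
      for ns in "$ctrl/${ctrl_dev}n"*; do               # nvme3n1, nvme3n2, ...
          [[ -e $ns ]] || continue
          ns_dev=${ns##*/}
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
          _ctrl_ns[${ns_dev##*n}]=$ns_dev               # keyed by namespace index
      done
      unset -n _ctrl_ns

      ctrls["$ctrl_dev"]=$ctrl_dev                      # bookkeeping as in the trace:
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 #   nvmes[nvme3]=nvme3_ns
      bdfs["$ctrl_dev"]=$pci                            #   bdfs[nvme3]=0000:00:13.0
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        #   ordered_ctrls[3]=nvme3
  done

Storing the *name* of each per-controller namespace array in nvmes, rather than its contents, is what later lets callers reach the decoded data through a nameref, as the feature lookup at the end of this trace does.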
00:12:22.770 14:17:47 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme_get nvme3 id-ctrl: fields decoded into the nvme3 array:
00:12:22.770 14:17:47 nvme_fdp --     vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7
00:12:22.770 14:17:47 nvme_fdp --     cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:12:22.770 14:17:47 nvme_fdp --     crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3
00:12:22.771 14:17:47 nvme_fdp --     lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0
00:12:22.771 14:17:47 nvme_fdp --     tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:12:22.771 14:17:47 nvme_fdp --     sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
00:12:22.771 14:17:47 nvme_fdp --     pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0
00:12:22.772 14:17:47 nvme_fdp --     fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:12:22.772 14:17:47 nvme_fdp --     mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:12:22.772 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:22.772 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.772 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:22.772 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:23.032 14:17:47 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:23.032 14:17:47 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:12:23.033 14:17:47 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:12:23.033 14:17:47 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:23.033 14:17:47 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:23.033 14:17:47 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:23.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:24.541 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:24.541 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:24.541 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:24.541 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:24.801 14:17:49 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:24.801 14:17:49 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:24.801 14:17:49 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.801 14:17:49 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:24.801 ************************************ 00:12:24.801 START TEST nvme_flexible_data_placement 00:12:24.801 ************************************ 00:12:24.801 14:17:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:25.061 Initializing NVMe Controllers 00:12:25.061 Attaching to 0000:00:13.0 00:12:25.061 Controller supports FDP Attached to 0000:00:13.0 00:12:25.061 Namespace ID: 1 Endurance Group ID: 1 00:12:25.061 Initialization complete. 
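The selection logic traced above reduces to one test: read each controller's CTRATT identify field and check bit 19, the Flexible Data Placement attribute. nvme0, nvme1, and nvme2 all report ctratt=0x8000, while nvme3 reports 0x88010, the only value with bit 19 set, so nvme3 (at 0000:00:13.0) is handed to the fdp test binary. A minimal standalone sketch of the same check, assuming nvme-cli is installed and the /dev/nvme* device names are illustrative:

    # Pick the first controller whose CTRATT has the FDP bit (bit 19) set.
    # `nvme id-ctrl` prints a line like "ctratt    : 0x88010".
    for ctrl in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
      ctratt=$(nvme id-ctrl "$ctrl" | awk '/^ctratt/ {print $3}')
      if ((ctratt & 1 << 19)); then
        echo "$ctrl supports FDP (ctratt=$ctratt)"
        break
      fi
    done

The functions.sh helpers avoid a subprocess per query: the long register trace above is one identify dump being read line by line (IFS=: splitting each "reg : val" pair) into a per-controller bash associative array, which ctrl_has_fdp then indexes for ctratt.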
00:12:25.061 00:12:25.061 ================================== 00:12:25.061 == FDP tests for Namespace: #01 == 00:12:25.061 ================================== 00:12:25.061 00:12:25.061 Get Feature: FDP: 00:12:25.061 ================= 00:12:25.061 Enabled: Yes 00:12:25.061 FDP configuration Index: 0 00:12:25.061 00:12:25.061 FDP configurations log page 00:12:25.061 =========================== 00:12:25.061 Number of FDP configurations: 1 00:12:25.061 Version: 0 00:12:25.061 Size: 112 00:12:25.061 FDP Configuration Descriptor: 0 00:12:25.061 Descriptor Size: 96 00:12:25.061 Reclaim Group Identifier format: 2 00:12:25.061 FDP Volatile Write Cache: Not Present 00:12:25.061 FDP Configuration: Valid 00:12:25.061 Vendor Specific Size: 0 00:12:25.061 Number of Reclaim Groups: 2 00:12:25.061 Number of Reclaim Unit Handles: 8 00:12:25.061 Max Placement Identifiers: 128 00:12:25.061 Number of Namespaces Supported: 256 00:12:25.061 Reclaim Unit Nominal Size: 6000000 bytes 00:12:25.061 Estimated Reclaim Unit Time Limit: Not Reported 00:12:25.061 RUH Desc #000: RUH Type: Initially Isolated 00:12:25.061 RUH Desc #001: RUH Type: Initially Isolated 00:12:25.061 RUH Desc #002: RUH Type: Initially Isolated 00:12:25.061 RUH Desc #003: RUH Type: Initially Isolated 00:12:25.061 RUH Desc #004: RUH Type: Initially Isolated 00:12:25.061 RUH Desc #005: RUH Type: Initially Isolated 00:12:25.061 RUH Desc #006: RUH Type: Initially Isolated 00:12:25.061 RUH Desc #007: RUH Type: Initially Isolated 00:12:25.061 00:12:25.061 FDP reclaim unit handle usage log page 00:12:25.061 ====================================== 00:12:25.061 Number of Reclaim Unit Handles: 8 00:12:25.061 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:25.061 RUH Usage Desc #001: RUH Attributes: Unused 00:12:25.061 RUH Usage Desc #002: RUH Attributes: Unused 00:12:25.061 RUH Usage Desc #003: RUH Attributes: Unused 00:12:25.061 RUH Usage Desc #004: RUH Attributes: Unused 00:12:25.061 RUH Usage Desc #005: RUH Attributes: Unused 00:12:25.061 RUH Usage Desc #006: RUH Attributes: Unused 00:12:25.061 RUH Usage Desc #007: RUH Attributes: Unused 00:12:25.061 00:12:25.061 FDP statistics log page 00:12:25.061 ======================= 00:12:25.061 Host bytes with metadata written: 1124610048 00:12:25.061 Media bytes with metadata written: 1124876288 00:12:25.061 Media bytes erased: 0 00:12:25.061 00:12:25.061 FDP Reclaim unit handle status 00:12:25.061 ============================== 00:12:25.061 Number of RUHS descriptors: 2 00:12:25.061 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004f7d 00:12:25.061 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:25.061 00:12:25.061 FDP write on placement id: 0 success 00:12:25.061 00:12:25.061 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:25.061 00:12:25.061 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:25.061 00:12:25.061 Get Feature: FDP Events for Placement handle: #0 00:12:25.061 ======================== 00:12:25.061 Number of FDP Events: 6 00:12:25.061 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:25.061 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:25.061 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:12:25.061 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:25.061 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:25.061 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:25.061 00:12:25.061 FDP events log
page 00:12:25.061 =================== 00:12:25.061 Number of FDP events: 1 00:12:25.061 FDP Event #0: 00:12:25.061 Event Type: RU Not Written to Capacity 00:12:25.061 Placement Identifier: Valid 00:12:25.061 NSID: Valid 00:12:25.061 Location: Valid 00:12:25.061 Placement Identifier: 0 00:12:25.061 Event Timestamp: 8 00:12:25.061 Namespace Identifier: 1 00:12:25.061 Reclaim Group Identifier: 0 00:12:25.061 Reclaim Unit Handle Identifier: 0 00:12:25.061 00:12:25.061 FDP test passed 00:12:25.061 00:12:25.061 real 0m0.307s 00:12:25.061 user 0m0.091s 00:12:25.061 sys 0m0.114s 00:12:25.061 14:17:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.061 14:17:49 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:25.061 ************************************ 00:12:25.061 END TEST nvme_flexible_data_placement 00:12:25.061 ************************************ 00:12:25.061 00:12:25.061 real 0m9.490s 00:12:25.061 user 0m1.827s 00:12:25.061 sys 0m2.765s 00:12:25.061 14:17:49 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.061 14:17:49 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:25.061 ************************************ 00:12:25.061 END TEST nvme_fdp 00:12:25.061 ************************************ 00:12:25.061 14:17:49 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:12:25.061 14:17:49 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:25.061 14:17:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:25.061 14:17:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.061 14:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:25.061 ************************************ 00:12:25.061 START TEST nvme_rpc 00:12:25.061 ************************************ 00:12:25.061 14:17:49 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:25.321 * Looking for test storage... 
00:12:25.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:25.321 14:17:49 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:25.321 14:17:50 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:25.321 14:17:50 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:25.321 14:17:50 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:25.321 14:17:50 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.321 14:17:50 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.321 14:17:50 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.321 14:17:50 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.321 14:17:50 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.321 14:17:50 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.321 14:17:50 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.321 14:17:50 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.322 14:17:50 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.322 --rc genhtml_branch_coverage=1 00:12:25.322 --rc genhtml_function_coverage=1 00:12:25.322 --rc genhtml_legend=1 00:12:25.322 --rc geninfo_all_blocks=1 00:12:25.322 --rc geninfo_unexecuted_blocks=1 00:12:25.322 00:12:25.322 ' 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.322 --rc genhtml_branch_coverage=1 00:12:25.322 --rc genhtml_function_coverage=1 00:12:25.322 --rc genhtml_legend=1 00:12:25.322 --rc geninfo_all_blocks=1 00:12:25.322 --rc geninfo_unexecuted_blocks=1 00:12:25.322 00:12:25.322 ' 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.322 --rc genhtml_branch_coverage=1 00:12:25.322 --rc genhtml_function_coverage=1 00:12:25.322 --rc genhtml_legend=1 00:12:25.322 --rc geninfo_all_blocks=1 00:12:25.322 --rc geninfo_unexecuted_blocks=1 00:12:25.322 00:12:25.322 ' 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.322 --rc genhtml_branch_coverage=1 00:12:25.322 --rc genhtml_function_coverage=1 00:12:25.322 --rc genhtml_legend=1 00:12:25.322 --rc geninfo_all_blocks=1 00:12:25.322 --rc geninfo_unexecuted_blocks=1 00:12:25.322 00:12:25.322 ' 00:12:25.322 14:17:50 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.322 14:17:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:25.322 14:17:50 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:25.581 14:17:50 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:25.581 14:17:50 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:25.581 14:17:50 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:25.581 14:17:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:25.581 14:17:50 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=68340 00:12:25.581 14:17:50 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:25.581 14:17:50 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:25.581 14:17:50 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 68340 00:12:25.581 14:17:50 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68340 ']' 00:12:25.581 14:17:50 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.581 14:17:50 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.581 14:17:50 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.581 14:17:50 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.581 14:17:50 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.581 [2024-12-10 14:17:50.330483] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
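Before each sub-test, autotest_common.sh probes the installed lcov and, via cmp_versions, decides whether it predates 2.x so the right coverage flags get exported. The comparison itself is purely structural: split both version strings on '.', '-' and ':' and compare numeric fields left to right, padding the shorter string with zeros. A self-contained sketch of that idea (bash 4+; the helper name is illustrative and numeric fields are assumed):

    # Return 0 if dotted version $1 sorts strictly before $2.
    version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1 # equal
    }
    version_lt 1.15 2 && echo "lcov older than 2"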
00:12:25.581 [2024-12-10 14:17:50.330621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68340 ] 00:12:25.840 [2024-12-10 14:17:50.516893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:25.840 [2024-12-10 14:17:50.650132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.840 [2024-12-10 14:17:50.650183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.219 14:17:51 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.219 14:17:51 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:27.219 14:17:51 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:27.219 Nvme0n1 00:12:27.219 14:17:51 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:27.219 14:17:51 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:27.478 request: 00:12:27.478 { 00:12:27.478 "bdev_name": "Nvme0n1", 00:12:27.478 "filename": "non_existing_file", 00:12:27.478 "method": "bdev_nvme_apply_firmware", 00:12:27.478 "req_id": 1 00:12:27.478 } 00:12:27.478 Got JSON-RPC error response 00:12:27.478 response: 00:12:27.478 { 00:12:27.478 "code": -32603, 00:12:27.478 "message": "open file failed." 00:12:27.478 } 00:12:27.478 14:17:52 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:27.478 14:17:52 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:27.478 14:17:52 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:27.737 14:17:52 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:27.737 14:17:52 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 68340 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68340 ']' 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68340 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68340 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.737 killing process with pid 68340 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68340' 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@973 -- # kill 68340 00:12:27.737 14:17:52 nvme_rpc -- common/autotest_common.sh@978 -- # wait 68340 00:12:30.274 00:12:30.274 real 0m4.638s 00:12:30.274 user 0m8.244s 00:12:30.274 sys 0m0.924s 00:12:30.274 14:17:54 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.274 ************************************ 00:12:30.274 END TEST nvme_rpc 00:12:30.274 ************************************ 00:12:30.274 14:17:54 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.274 14:17:54 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:30.274 14:17:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:12:30.274 14:17:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.274 14:17:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.274 ************************************ 00:12:30.274 START TEST nvme_rpc_timeouts 00:12:30.274 ************************************ 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:30.274 * Looking for test storage... 00:12:30.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.274 14:17:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:30.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.274 --rc genhtml_branch_coverage=1 00:12:30.274 --rc genhtml_function_coverage=1 00:12:30.274 --rc genhtml_legend=1 00:12:30.274 --rc geninfo_all_blocks=1 00:12:30.274 --rc geninfo_unexecuted_blocks=1 00:12:30.274 00:12:30.274 ' 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:30.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.274 --rc genhtml_branch_coverage=1 00:12:30.274 --rc genhtml_function_coverage=1 00:12:30.274 --rc genhtml_legend=1 00:12:30.274 --rc geninfo_all_blocks=1 00:12:30.274 --rc geninfo_unexecuted_blocks=1 00:12:30.274 00:12:30.274 ' 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:30.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.274 --rc genhtml_branch_coverage=1 00:12:30.274 --rc genhtml_function_coverage=1 00:12:30.274 --rc genhtml_legend=1 00:12:30.274 --rc geninfo_all_blocks=1 00:12:30.274 --rc geninfo_unexecuted_blocks=1 00:12:30.274 00:12:30.274 ' 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:30.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.274 --rc genhtml_branch_coverage=1 00:12:30.274 --rc genhtml_function_coverage=1 00:12:30.274 --rc genhtml_legend=1 00:12:30.274 --rc geninfo_all_blocks=1 00:12:30.274 --rc geninfo_unexecuted_blocks=1 00:12:30.274 00:12:30.274 ' 00:12:30.274 14:17:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:30.274 14:17:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68416 00:12:30.274 14:17:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68416 00:12:30.274 14:17:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68449 00:12:30.274 14:17:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:30.274 14:17:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:30.274 14:17:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68449 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 68449 ']' 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.274 14:17:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:30.274 [2024-12-10 14:17:54.932870] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:12:30.274 [2024-12-10 14:17:54.933022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68449 ] 00:12:30.533 [2024-12-10 14:17:55.120581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:30.533 [2024-12-10 14:17:55.228842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.533 [2024-12-10 14:17:55.228864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.468 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.469 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:12:31.469 Checking default timeout settings: 00:12:31.469 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:31.469 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:31.727 Making settings changes with rpc: 00:12:31.727 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:31.727 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:31.727 Check default vs. modified settings: 00:12:31.727 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:12:31.727 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68416 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68416 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:32.295 Setting action_on_timeout is changed as expected. 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68416 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68416 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:32.295 Setting timeout_us is changed as expected. 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68416 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68416 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:32.295 Setting timeout_admin_us is changed as expected. 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68416 /tmp/settings_modified_68416 00:12:32.295 14:17:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68449 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 68449 ']' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 68449 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68449 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.295 killing process with pid 68449 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68449' 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 68449 00:12:32.295 14:17:56 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 68449 00:12:34.831 RPC TIMEOUT SETTING TEST PASSED. 00:12:34.831 14:17:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
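Unwound from the xtrace, the timeouts test is a simple before/after diff: snapshot save_config, apply bdev_nvme_set_options, snapshot again, then compare the two JSON dumps to confirm action_on_timeout went none to abort, timeout_us 0 to 12000000, and timeout_admin_us 0 to 24000000. The same round trip can be driven by hand against a freshly started spdk_tgt; a sketch assuming jq is available and that the jq paths follow the usual save_config layout:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/before.json
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/after.json
    # Pull the nvme bdev options out of each snapshot for comparison.
    jq '.subsystems[]
        | select(.subsystem == "bdev")
        | .config[]
        | select(.method == "bdev_nvme_set_options")
        | .params
        | {action_on_timeout, timeout_us, timeout_admin_us}' \
      /tmp/before.json /tmp/after.json

The test's own comparison strips non-alphanumerics with sed and picks fields with awk instead of using jq, which keeps it dependency-free at the cost of readability.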
00:12:34.831 00:12:34.831 real 0m4.691s 00:12:34.831 user 0m8.730s 00:12:34.831 sys 0m0.777s 00:12:34.831 14:17:59 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.831 14:17:59 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:34.831 ************************************ 00:12:34.831 END TEST nvme_rpc_timeouts 00:12:34.831 ************************************ 00:12:34.831 14:17:59 -- spdk/autotest.sh@239 -- # uname -s 00:12:34.831 14:17:59 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:12:34.831 14:17:59 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:34.831 14:17:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:34.831 14:17:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.831 14:17:59 -- common/autotest_common.sh@10 -- # set +x 00:12:34.831 ************************************ 00:12:34.831 START TEST sw_hotplug 00:12:34.831 ************************************ 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:34.831 * Looking for test storage... 00:12:34.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.831 14:17:59 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:34.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.831 --rc genhtml_branch_coverage=1 00:12:34.831 --rc genhtml_function_coverage=1 00:12:34.831 --rc genhtml_legend=1 00:12:34.831 --rc geninfo_all_blocks=1 00:12:34.831 --rc geninfo_unexecuted_blocks=1 00:12:34.831 00:12:34.831 ' 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:34.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.831 --rc genhtml_branch_coverage=1 00:12:34.831 --rc genhtml_function_coverage=1 00:12:34.831 --rc genhtml_legend=1 00:12:34.831 --rc geninfo_all_blocks=1 00:12:34.831 --rc geninfo_unexecuted_blocks=1 00:12:34.831 00:12:34.831 ' 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:34.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.831 --rc genhtml_branch_coverage=1 00:12:34.831 --rc genhtml_function_coverage=1 00:12:34.831 --rc genhtml_legend=1 00:12:34.831 --rc geninfo_all_blocks=1 00:12:34.831 --rc geninfo_unexecuted_blocks=1 00:12:34.831 00:12:34.831 ' 00:12:34.831 14:17:59 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:34.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.831 --rc genhtml_branch_coverage=1 00:12:34.831 --rc genhtml_function_coverage=1 00:12:34.831 --rc genhtml_legend=1 00:12:34.831 --rc geninfo_all_blocks=1 00:12:34.831 --rc geninfo_unexecuted_blocks=1 00:12:34.831 00:12:34.831 ' 00:12:34.832 14:17:59 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:35.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:35.659 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:35.659 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:35.659 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:35.659 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:35.919 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:35.919 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:35.919 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:12:35.919 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@233 -- # local class 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:35.919 14:18:00 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:12:35.919 14:18:00 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:35.919 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:35.919 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:35.919 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:36.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:36.748 Waiting for block devices as requested 00:12:36.748 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.007 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.007 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.266 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:42.543 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:42.543 14:18:07 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:42.543 14:18:07 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:43.140 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:43.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:43.140 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:43.434 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:43.694 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:43.953 14:18:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=69339 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:43.953 14:18:08 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:43.953 14:18:08 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:43.953 14:18:08 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:43.953 14:18:08 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:43.953 14:18:08 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:43.953 14:18:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:44.213 Initializing NVMe Controllers 00:12:44.213 Attaching to 0000:00:10.0 00:12:44.213 Attaching to 0000:00:11.0 00:12:44.213 Attached to 0000:00:11.0 00:12:44.213 Attached to 0000:00:10.0 00:12:44.213 Initialization complete. Starting I/O... 
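The device list fed to the hotplug binary comes from the nvme_in_userspace walk in the prologue above, which identifies NVMe controllers by PCI class code rather than by bound driver: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), hence the "0108" and -p02 filters around lspci. The core of that pipeline as a standalone sketch, assuming pciutils (note that lspci -mm quotes each field, so the class column is literally "0108" including the quotes):

    # List PCI addresses of all NVMe-class devices (class/subclass 0108, progif 02).
    lspci -mm -n -D | grep -i -- -p02 | \
      awk -F ' ' '$2 == "\"0108\"" {print $1}'

nvme_count=2 then trims the four-device result to the first two, which is why only 0000:00:10.0 and 0000:00:11.0 are rebound to uio_pci_generic for the hotplug app while 0000:00:12.0 and 0000:00:13.0 stay denied.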
00:12:44.213 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:44.213 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:44.213 00:12:45.592 QEMU NVMe Ctrl (12341 ): 1448 I/Os completed (+1448) 00:12:45.592 QEMU NVMe Ctrl (12340 ): 1457 I/Os completed (+1457) 00:12:45.592 00:12:46.529 QEMU NVMe Ctrl (12341 ): 3436 I/Os completed (+1988) 00:12:46.529 QEMU NVMe Ctrl (12340 ): 3448 I/Os completed (+1991) 00:12:46.529 00:12:47.467 QEMU NVMe Ctrl (12341 ): 5572 I/Os completed (+2136) 00:12:47.467 QEMU NVMe Ctrl (12340 ): 5591 I/Os completed (+2143) 00:12:47.467 00:12:48.404 QEMU NVMe Ctrl (12341 ): 7648 I/Os completed (+2076) 00:12:48.404 QEMU NVMe Ctrl (12340 ): 7678 I/Os completed (+2087) 00:12:48.404 00:12:49.341 QEMU NVMe Ctrl (12341 ): 9772 I/Os completed (+2124) 00:12:49.341 QEMU NVMe Ctrl (12340 ): 9803 I/Os completed (+2125) 00:12:49.341 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.280 [2024-12-10 14:18:14.771883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:50.280 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:50.280 [2024-12-10 14:18:14.774034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.774106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.774134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.774161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:50.280 [2024-12-10 14:18:14.777026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.777088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.777109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.777132] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.280 [2024-12-10 14:18:14.813913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:50.280 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:50.280 [2024-12-10 14:18:14.815649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.815718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.815750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.815775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:50.280 [2024-12-10 14:18:14.818467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.818510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.818535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 [2024-12-10 14:18:14.818555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:50.280 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:50.280 EAL: Scan for (pci) bus failed. 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.280 14:18:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:50.280 00:12:50.280 14:18:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:50.280 14:18:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:50.280 14:18:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.280 14:18:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.280 14:18:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:50.280 Attaching to 0000:00:10.0 00:12:50.280 Attached to 0000:00:10.0 00:12:50.539 14:18:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:50.539 14:18:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:50.539 14:18:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:50.539 Attaching to 0000:00:11.0 00:12:50.539 Attached to 0000:00:11.0 00:12:51.477 QEMU NVMe Ctrl (12340 ): 1924 I/Os completed (+1924) 00:12:51.477 QEMU NVMe Ctrl (12341 ): 1711 I/Os completed (+1711) 00:12:51.477 00:12:52.415 QEMU NVMe Ctrl (12340 ): 4032 I/Os completed (+2108) 00:12:52.415 QEMU NVMe Ctrl (12341 ): 3822 I/Os completed (+2111) 00:12:52.415 00:12:53.352 QEMU NVMe Ctrl (12340 ): 6144 I/Os completed (+2112) 00:12:53.352 QEMU NVMe Ctrl (12341 ): 5934 I/Os completed (+2112) 00:12:53.352 00:12:54.289 QEMU NVMe Ctrl (12340 ): 8248 I/Os completed (+2104) 00:12:54.289 QEMU NVMe Ctrl (12341 ): 8043 I/Os completed (+2109) 00:12:54.289 00:12:55.229 QEMU NVMe Ctrl (12340 ): 10340 I/Os completed (+2092) 00:12:55.229 QEMU NVMe Ctrl (12341 ): 10137 I/Os completed (+2094) 00:12:55.229 00:12:56.167 QEMU NVMe Ctrl (12340 ): 12412 I/Os completed (+2072) 00:12:56.167 QEMU NVMe Ctrl (12341 ): 12209 I/Os completed (+2072) 00:12:56.167 00:12:57.547 QEMU NVMe Ctrl (12340 ): 14548 I/Os completed (+2136) 00:12:57.547 QEMU NVMe Ctrl (12341 ): 14351 I/Os completed (+2142) 
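The bare echo entries in the trace above (sw_hotplug.sh@40, @56, @59-62) are sysfs writes whose redirection targets xtrace does not print. They follow the standard Linux PCI hot-remove/re-probe pattern; of the paths below only /sys/bus/pci/rescan is confirmed verbatim by this log (in the trap installed at sw_hotplug.sh@112 later on), the rest are assumptions. A sketch of one detach/reattach cycle:

    # Hot-remove both test controllers, rescan the bus, then steer each
    # rediscovered function to uio_pci_generic via driver_override.
    # The sysfs targets below are assumed; the xtrace hides redirections.
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"       # sw_hotplug.sh@40
    done
    echo 1 > /sys/bus/pci/rescan                          # sw_hotplug.sh@56
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe          # bind per the override
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    done

The hotplug example app then reports "Attaching to"/"Attached to" as its probe path picks the controllers back up, and the I/O counters resume from zero.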
00:12:57.547 00:12:58.484 QEMU NVMe Ctrl (12340 ): 16620 I/Os completed (+2072) 00:12:58.484 QEMU NVMe Ctrl (12341 ): 16423 I/Os completed (+2072) 00:12:58.484 00:12:59.422 QEMU NVMe Ctrl (12340 ): 18728 I/Os completed (+2108) 00:12:59.422 QEMU NVMe Ctrl (12341 ): 18540 I/Os completed (+2117) 00:12:59.422 00:13:00.360 QEMU NVMe Ctrl (12340 ): 20804 I/Os completed (+2076) 00:13:00.360 QEMU NVMe Ctrl (12341 ): 20616 I/Os completed (+2076) 00:13:00.360 00:13:01.298 QEMU NVMe Ctrl (12340 ): 22880 I/Os completed (+2076) 00:13:01.298 QEMU NVMe Ctrl (12341 ): 22695 I/Os completed (+2079) 00:13:01.298 00:13:02.235 QEMU NVMe Ctrl (12340 ): 24984 I/Os completed (+2104) 00:13:02.235 QEMU NVMe Ctrl (12341 ): 24799 I/Os completed (+2104) 00:13:02.235 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:02.495 [2024-12-10 14:18:27.170832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:02.495 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:02.495 [2024-12-10 14:18:27.172949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.173061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.173135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.173189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:02.495 [2024-12-10 14:18:27.176421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.176519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.176568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.176618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:02.495 [2024-12-10 14:18:27.211131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:02.495 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:02.495 [2024-12-10 14:18:27.213056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.213106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.213137] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.213161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:02.495 [2024-12-10 14:18:27.215874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.215919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.215943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 [2024-12-10 14:18:27.215966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.495 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:02.495 EAL: Scan for (pci) bus failed. 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:02.495 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:02.754 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:02.754 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:02.754 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:02.754 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:02.754 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:02.754 Attaching to 0000:00:10.0 00:13:02.754 Attached to 0000:00:10.0 00:13:02.754 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:02.754 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:02.754 14:18:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:02.754 Attaching to 0000:00:11.0 00:13:02.754 Attached to 0000:00:11.0 00:13:03.323 QEMU NVMe Ctrl (12340 ): 1112 I/Os completed (+1112) 00:13:03.323 QEMU NVMe Ctrl (12341 ): 880 I/Os completed (+880) 00:13:03.323 00:13:04.261 QEMU NVMe Ctrl (12340 ): 3164 I/Os completed (+2052) 00:13:04.261 QEMU NVMe Ctrl (12341 ): 2932 I/Os completed (+2052) 00:13:04.261 00:13:05.199 QEMU NVMe Ctrl (12340 ): 5220 I/Os completed (+2056) 00:13:05.199 QEMU NVMe Ctrl (12341 ): 4988 I/Os completed (+2056) 00:13:05.199 00:13:06.137 QEMU NVMe Ctrl (12340 ): 7240 I/Os completed (+2020) 00:13:06.137 QEMU NVMe Ctrl (12341 ): 7008 I/Os completed (+2020) 00:13:06.137 00:13:07.516 QEMU NVMe Ctrl (12340 ): 9320 I/Os completed (+2080) 00:13:07.516 QEMU NVMe Ctrl (12341 ): 9096 I/Os completed (+2088) 00:13:07.516 00:13:08.467 QEMU NVMe Ctrl (12340 ): 11376 I/Os completed (+2056) 00:13:08.467 QEMU NVMe Ctrl (12341 ): 11152 I/Os completed (+2056) 00:13:08.467 00:13:09.462 QEMU NVMe Ctrl (12340 ): 13468 I/Os completed (+2092) 00:13:09.462 QEMU NVMe Ctrl (12341 ): 13250 I/Os completed (+2098) 00:13:09.462 
00:13:10.400 QEMU NVMe Ctrl (12340 ): 15576 I/Os completed (+2108) 00:13:10.400 QEMU NVMe Ctrl (12341 ): 15367 I/Os completed (+2117) 00:13:10.400 00:13:11.336 QEMU NVMe Ctrl (12340 ): 17632 I/Os completed (+2056) 00:13:11.336 QEMU NVMe Ctrl (12341 ): 17423 I/Os completed (+2056) 00:13:11.336 00:13:12.274 QEMU NVMe Ctrl (12340 ): 19748 I/Os completed (+2116) 00:13:12.274 QEMU NVMe Ctrl (12341 ): 19549 I/Os completed (+2126) 00:13:12.274 00:13:13.211 QEMU NVMe Ctrl (12340 ): 21844 I/Os completed (+2096) 00:13:13.212 QEMU NVMe Ctrl (12341 ): 21649 I/Os completed (+2100) 00:13:13.212 00:13:14.152 QEMU NVMe Ctrl (12340 ): 23956 I/Os completed (+2112) 00:13:14.152 QEMU NVMe Ctrl (12341 ): 23766 I/Os completed (+2117) 00:13:14.152 00:13:14.720 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:14.720 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:14.980 [2024-12-10 14:18:39.555990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:14.980 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:14.980 [2024-12-10 14:18:39.557896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.557960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.557988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.558015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:14.980 [2024-12-10 14:18:39.561128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.561353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.561384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.561408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:14.980 [2024-12-10 14:18:39.598900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:14.980 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:14.980 [2024-12-10 14:18:39.600918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.601128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.601167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.601195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:14.980 [2024-12-10 14:18:39.603981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.604026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.604053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 [2024-12-10 14:18:39.604076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:14.980 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/subsystem_device 00:13:14.980 EAL: Scan for (pci) bus failed. 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:14.980 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:15.240 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:15.241 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:15.241 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:15.241 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:15.241 Attaching to 0000:00:10.0 00:13:15.241 Attached to 0000:00:10.0 00:13:15.241 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:15.241 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:15.241 14:18:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:15.241 Attaching to 0000:00:11.0 00:13:15.241 Attached to 0000:00:11.0 00:13:15.241 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:15.241 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:15.241 [2024-12-10 14:18:39.931813] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:27.458 14:18:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:27.458 14:18:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:27.458 14:18:51 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.16 00:13:27.458 14:18:51 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.16 00:13:27.458 14:18:51 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:27.458 14:18:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.16 00:13:27.458 14:18:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.16 2 00:13:27.458 remove_attach_helper took 43.16s to complete (handling 2 nvme drive(s)) 14:18:51 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6
00:13:34.032 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 69339
00:13:34.032 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (69339) - No such process
00:13:34.032 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 69339
00:13:34.032 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:13:34.032 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:13:34.032 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:13:34.032 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69880
00:13:34.032 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:13:34.032 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:13:34.032 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69880
00:13:34.032 14:18:57 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69880 ']'
00:13:34.032 14:18:57 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:34.032 14:18:57 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:34.032 14:18:57 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:34.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:34.032 14:18:57 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:34.032 14:18:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:34.032 [2024-12-10 14:18:58.057584] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization...
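kill -0 above delivers no signal; it only tests whether the PID can still be signaled, so the "No such process" message means the hotplug app had already exited on its own before the script reaped it. The idiom, sketched:

    # Liveness probe + reap. kill -0 succeeds iff the process exists (and
    # is signalable); wait then collects the exit status of a child that
    # may have terminated long ago.
    some_app &             # hypothetical child; the log's equivalent is the
    pid=$!                 # backgrounded build/examples/hotplug (pid 69339)
    if kill -0 "$pid" 2>/dev/null; then
        echo "pid $pid still running"
    fi
    wait "$pid"            # returns the child's exit status
    echo "exit status: $?"

wait only works on children of the current shell, which fits the trace capturing hotplug_pid at the moment the binary was launched.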
00:13:34.032 [2024-12-10 14:18:58.057940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69880 ]
00:13:34.032 [2024-12-10 14:18:58.240512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:34.032 [2024-12-10 14:18:58.351000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:13:34.600 14:18:59 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.600 14:18:59 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:13:34.600 14:18:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:13:34.600 14:18:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:13:34.600 14:18:59 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:13:34.600 14:18:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:13:34.600 14:18:59 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:13:34.600 14:18:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:13:34.600 14:18:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:13:34.600 14:18:59 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:41.170 14:19:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:41.170 14:19:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:41.170 [2024-12-10 14:19:05.261000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
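This second pass runs with use_bdev=true: instead of trusting the example app's console output, the helper asks the running spdk_tgt over RPC which NVMe controllers still back bdevs. The bdev_bdfs helper traced at sw_hotplug.sh@12-13 reduces the bdev_get_bdevs JSON to a sorted, de-duplicated list of PCI addresses; reconstructed as a standalone sketch (rpc_cmd is the test suite's RPC wrapper; plain scripts/rpc.py from the SPDK checkout talks to the same /var/tmp/spdk.sock):

    # List the PCI address of every NVMe controller currently backing a
    # bdev, one BDF per line, duplicates collapsed.
    bdev_bdfs() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

Immediately after the two hot-removes this list should shrink to empty, which is what the wait loop below polls for.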
00:13:41.170 [2024-12-10 14:19:05.263777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.170 [2024-12-10 14:19:05.263850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.170 [2024-12-10 14:19:05.263874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.170 [2024-12-10 14:19:05.263944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.170 [2024-12-10 14:19:05.263961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.170 [2024-12-10 14:19:05.263982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.170 [2024-12-10 14:19:05.263999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.170 [2024-12-10 14:19:05.264015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.170 [2024-12-10 14:19:05.264028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.170 [2024-12-10 14:19:05.264049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.170 [2024-12-10 14:19:05.264062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.170 [2024-12-10 14:19:05.264079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.170 14:19:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:41.170 [2024-12-10 14:19:05.760095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:41.170 [2024-12-10 14:19:05.762458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.170 [2024-12-10 14:19:05.762643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.170 [2024-12-10 14:19:05.762689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.170 [2024-12-10 14:19:05.762713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.170 [2024-12-10 14:19:05.762731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.170 [2024-12-10 14:19:05.762745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.170 [2024-12-10 14:19:05.762763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.170 [2024-12-10 14:19:05.762777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.170 [2024-12-10 14:19:05.762795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.170 [2024-12-10 14:19:05.762812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.170 [2024-12-10 14:19:05.762829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.170 [2024-12-10 14:19:05.762843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:41.170 14:19:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.170 14:19:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.170 14:19:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:41.170 14:19:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:41.429 14:19:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:41.429 14:19:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:41.429 14:19:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:41.429 14:19:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:41.429 14:19:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
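The (( 1 > 0 )), sleep 0.5 and printf 'Still waiting for %s to be gone' entries above are iterations of the helper's wait loop: re-run bdev_bdfs until the hot-removed controllers drop out of the target's RPC view, then rebind them. A sketch matching the sw_hotplug.sh@50-51 trace, reusing the bdev_bdfs sketch above:

    # Poll until no bdev reports a PCI address any more, i.e. both
    # hot-removed controllers are really gone from spdk_tgt's view.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

Note the first poll above still returned 0000:00:11.0 even though both controllers had been removed: the second controller's teardown had not finished by the time bdev_get_bdevs ran, hence exactly one 'Still waiting' line.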
00:13:41.429 14:19:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:41.429 14:19:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:41.429 14:19:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:53.649 14:19:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.649 14:19:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:53.649 14:19:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:53.649 14:19:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.649 14:19:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:53.649 [2024-12-10 14:19:18.339844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:53.649 [2024-12-10 14:19:18.342439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.649 [2024-12-10 14:19:18.342613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.649 [2024-12-10 14:19:18.342795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.649 [2024-12-10 14:19:18.342979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.649 [2024-12-10 14:19:18.343023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.649 [2024-12-10 14:19:18.343143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.649 [2024-12-10 14:19:18.343209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.649 [2024-12-10 14:19:18.343295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.649 [2024-12-10 14:19:18.343360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.649 [2024-12-10 14:19:18.343469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:53.649 [2024-12-10 14:19:18.343511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.649 [2024-12-10 14:19:18.343574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.649 14:19:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:53.649 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:53.908 [2024-12-10 14:19:18.739189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:54.168 [2024-12-10 14:19:18.741783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.168 [2024-12-10 14:19:18.741960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.168 [2024-12-10 14:19:18.742095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.168 [2024-12-10 14:19:18.742163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.168 [2024-12-10 14:19:18.742265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.168 [2024-12-10 14:19:18.742329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.168 [2024-12-10 14:19:18.742442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.168 [2024-12-10 14:19:18.742485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.168 [2024-12-10 14:19:18.742546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.168 [2024-12-10 14:19:18.742656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.168 [2024-12-10 14:19:18.742725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.168 [2024-12-10 14:19:18.742917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.168 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:54.168 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:54.168 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:54.168 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:54.168 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:54.168 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:54.168 14:19:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.168 14:19:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.168 14:19:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.168 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:54.168 14:19:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:54.427 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.427 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.427 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:54.427 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:54.427 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.427 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.427 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.427 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:54.427 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:54.686 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.686 14:19:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:06.932 14:19:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.932 14:19:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:06.932 14:19:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:06.932 14:19:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.932 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:06.932 14:19:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:06.932 [2024-12-10 14:19:31.418768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:06.932 [2024-12-10 14:19:31.421178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.932 [2024-12-10 14:19:31.421232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.932 [2024-12-10 14:19:31.421251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.932 [2024-12-10 14:19:31.421280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.933 [2024-12-10 14:19:31.421295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.933 [2024-12-10 14:19:31.421315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.933 [2024-12-10 14:19:31.421332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.933 [2024-12-10 14:19:31.421349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.933 [2024-12-10 14:19:31.421363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.933 [2024-12-10 14:19:31.421381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.933 [2024-12-10 14:19:31.421395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.933 [2024-12-10 14:19:31.421413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.933 14:19:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.933 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:06.933 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:07.192 [2024-12-10 14:19:31.818109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:07.192 [2024-12-10 14:19:31.820621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.192 [2024-12-10 14:19:31.820666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.192 [2024-12-10 14:19:31.820704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.192 [2024-12-10 14:19:31.820725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.192 [2024-12-10 14:19:31.820741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.192 [2024-12-10 14:19:31.820755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.192 [2024-12-10 14:19:31.820773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.192 [2024-12-10 14:19:31.820786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.192 [2024-12-10 14:19:31.820828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.192 [2024-12-10 14:19:31.820843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.192 [2024-12-10 14:19:31.820859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.192 [2024-12-10 14:19:31.820872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.192 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:07.192 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:07.192 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:07.192 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:07.192 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:07.192 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:07.192 14:19:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.192 14:19:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.192 14:19:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.192 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:07.192 14:19:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:07.451 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:07.451 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:07.451 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:07.451 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:07.451 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:07.451 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:07.451 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:07.451 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:07.709 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:07.709 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:07.709 14:19:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.19 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.19 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:14:19.919 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:19.919 14:19:44 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:19.919 14:19:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:19.919 14:19:44 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:26.477 14:19:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.477 14:19:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:26.477 [2024-12-10 14:19:50.488587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:26.477 [2024-12-10 14:19:50.490948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.477 [2024-12-10 14:19:50.491003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.477 [2024-12-10 14:19:50.491023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.477 [2024-12-10 14:19:50.491056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.477 [2024-12-10 14:19:50.491069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.477 [2024-12-10 14:19:50.491085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.477 [2024-12-10 14:19:50.491099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.477 [2024-12-10 14:19:50.491114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.477 [2024-12-10 14:19:50.491127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.477 [2024-12-10 14:19:50.491144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.477 [2024-12-10 14:19:50.491155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.477 [2024-12-10 14:19:50.491175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.477 14:19:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:26.477 14:19:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:26.477 [2024-12-10 14:19:50.887927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:26.477 [2024-12-10 14:19:50.890246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.477 [2024-12-10 14:19:50.890293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.477 [2024-12-10 14:19:50.890314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.477 [2024-12-10 14:19:50.890336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.477 [2024-12-10 14:19:50.890352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.477 [2024-12-10 14:19:50.890364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.477 [2024-12-10 14:19:50.890383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.477 [2024-12-10 14:19:50.890395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.477 [2024-12-10 14:19:50.890411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.477 [2024-12-10 14:19:50.890423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.477 [2024-12-10 14:19:50.890438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.477 [2024-12-10 14:19:50.890449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.477 [2024-12-10 14:19:50.890470] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:14:26.477 [2024-12-10 14:19:50.890485] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:14:26.477 [2024-12-10 14:19:50.890502] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:14:26.477 [2024-12-10 14:19:50.890513] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:26.477 14:19:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.477 14:19:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:26.477 14:19:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:26.477 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- 
# echo 0000:00:10.0 00:14:26.735 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:26.735 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:26.735 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:26.735 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:26.735 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:26.735 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:26.735 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:26.735 14:19:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:38.933 14:20:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.933 14:20:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:38.933 14:20:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:38.933 [2024-12-10 14:20:03.567510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:38.933 [2024-12-10 14:20:03.569880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.933 [2024-12-10 14:20:03.570041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.933 [2024-12-10 14:20:03.570151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.933 [2024-12-10 14:20:03.570361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.933 [2024-12-10 14:20:03.570404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.933 [2024-12-10 14:20:03.570460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.933 [2024-12-10 14:20:03.570583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.933 [2024-12-10 14:20:03.570628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.933 [2024-12-10 14:20:03.570696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.933 [2024-12-10 14:20:03.570820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.933 [2024-12-10 14:20:03.570860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.933 [2024-12-10 14:20:03.570898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:38.933 14:20:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.933 14:20:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:38.933 14:20:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:38.933 14:20:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:39.191 [2024-12-10 14:20:03.966851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
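A hotplug event opens with the echo 1 writes traced at sw_hotplug.sh@40: the controller is surprise-removed, the driver marks it failed (nvme_ctrlr_fail above), and nvme_pcie_qpair_abort_trackers aborts the four outstanding ASYNC EVENT REQUESTs (cid 187-190), which bdev_nvme's aer_cb then reports as failed. The detach-wait loop at @50-51 polls until SPDK stops reporting the devices. A condensed sketch follows, with the caveat that the sysfs node targeted by the echo 1 never appears in the log, so the remove path is an assumption:

    # Surprise-remove each controller, then wait for SPDK to drop its bdevs.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"     # assumed sysfs target
    done
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done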
00:14:39.191 [2024-12-10 14:20:03.969216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:39.192 [2024-12-10 14:20:03.969255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.192 [2024-12-10 14:20:03.969274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.192 [2024-12-10 14:20:03.969294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:39.192 [2024-12-10 14:20:03.969308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.192 [2024-12-10 14:20:03.969320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.192 [2024-12-10 14:20:03.969336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:39.192 [2024-12-10 14:20:03.969347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.192 [2024-12-10 14:20:03.969362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.192 [2024-12-10 14:20:03.969375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:39.192 [2024-12-10 14:20:03.969389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.192 [2024-12-10 14:20:03.969400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.449 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:39.449 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:39.449 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:39.449 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:39.449 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:39.449 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:39.449 14:20:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.449 14:20:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:39.449 14:20:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.449 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:39.449 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:39.707 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:39.707 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:39.707 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:39.707 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:39.707 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:39.707 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:39.707 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:39.707 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:39.707 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:39.965 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:39.965 14:20:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:52.163 14:20:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.163 14:20:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:52.163 14:20:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:52.163 [2024-12-10 14:20:16.646447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:52.163 [2024-12-10 14:20:16.649350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:52.163 [2024-12-10 14:20:16.649538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.163 [2024-12-10 14:20:16.649772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.163 [2024-12-10 14:20:16.649914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:52.163 [2024-12-10 14:20:16.649954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.163 [2024-12-10 14:20:16.650064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.163 [2024-12-10 14:20:16.650186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:52.163 [2024-12-10 14:20:16.650231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.163 [2024-12-10 14:20:16.650326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.163 [2024-12-10 14:20:16.650387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:52.163 [2024-12-10 14:20:16.650569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.163 [2024-12-10 14:20:16.650631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:52.163 14:20:16 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:52.163 14:20:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.163 14:20:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:52.163 14:20:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:52.163 14:20:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:52.422 [2024-12-10 14:20:17.045794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:52.422 [2024-12-10 14:20:17.050986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:52.422 [2024-12-10 14:20:17.051126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.422 [2024-12-10 14:20:17.051302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.422 [2024-12-10 14:20:17.051366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:52.422 [2024-12-10 14:20:17.051459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.422 [2024-12-10 14:20:17.051516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.422 [2024-12-10 14:20:17.051617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:52.422 [2024-12-10 14:20:17.051655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.422 [2024-12-10 14:20:17.051728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.422 [2024-12-10 14:20:17.051839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:52.422 [2024-12-10 14:20:17.051880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.422 [2024-12-10 14:20:17.051931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.422 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:52.422 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:52.422 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:52.422 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:52.422 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:52.422 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
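The entries that follow perform the re-attach half of the event: @56 rescans the bus, @58-62 rebind each BDF to uio_pci_generic (the empty echo at @62 presumably clears driver_override again), @66 sleeps 12 seconds, and @68-71 assert that the rediscovered BDF set matches the original pair. In rough outline; only the echoed values are visible in the log, so every sysfs path here is an assumption:

    echo 1 > /sys/bus/pci/rescan                    # assumed target of the @56 echo
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe    # assumed bind mechanism
    done
    sleep 12
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]               # the @71 assertion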
00:14:52.422 14:20:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.422 14:20:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:52.422 14:20:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:52.680 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:52.938 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:52.938 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:52.938 14:20:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.24 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.24 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.24 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.24 2 00:15:05.137 remove_attach_helper took 45.24s to complete (handling 2 nvme drive(s)) 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:05.137 14:20:29 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69880 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69880 ']' 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69880 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69880 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.137 14:20:29 sw_hotplug -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69880' 00:15:05.137 killing process with pid 69880 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69880 00:15:05.137 14:20:29 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69880 00:15:07.700 14:20:32 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:08.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:08.837 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:08.837 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:08.837 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.837 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.837 00:15:08.837 real 2m34.286s 00:15:08.837 user 1m50.686s 00:15:08.837 sys 0m23.774s 00:15:08.837 14:20:33 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.837 ************************************ 00:15:08.837 END TEST sw_hotplug 00:15:08.837 ************************************ 00:15:08.837 14:20:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:09.096 14:20:33 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:15:09.096 14:20:33 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:09.096 14:20:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:09.096 14:20:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.096 14:20:33 -- common/autotest_common.sh@10 -- # set +x 00:15:09.096 ************************************ 00:15:09.096 START TEST nvme_xnvme 00:15:09.096 ************************************ 00:15:09.096 14:20:33 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:09.096 * Looking for test storage... 
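The killprocess call traced above tears down the SPDK app (pid 69880, reactor_0) once the hotplug loop and its 45.24 s timing summary are done. Reconstructed from the autotest_common.sh trace; the sudo branch is only hinted at, since it is not exercised in this log:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0    # assumed: an already-gone pid is fine
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The '[ reactor_0 = sudo ]' check above special-cases a sudo wrapper;
        # its handling is not visible here.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }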
00:15:09.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:09.096 14:20:33 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:09.096 14:20:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:09.096 14:20:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:09.358 14:20:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.358 14:20:33 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.359 14:20:33 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:09.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.359 --rc genhtml_branch_coverage=1 00:15:09.359 --rc genhtml_function_coverage=1 00:15:09.359 --rc genhtml_legend=1 00:15:09.359 --rc geninfo_all_blocks=1 00:15:09.359 --rc geninfo_unexecuted_blocks=1 00:15:09.359 00:15:09.359 ' 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:09.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.359 --rc genhtml_branch_coverage=1 00:15:09.359 --rc genhtml_function_coverage=1 00:15:09.359 --rc genhtml_legend=1 00:15:09.359 --rc geninfo_all_blocks=1 00:15:09.359 --rc geninfo_unexecuted_blocks=1 00:15:09.359 00:15:09.359 ' 00:15:09.359 14:20:33 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:09.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.359 --rc genhtml_branch_coverage=1 00:15:09.359 --rc genhtml_function_coverage=1 00:15:09.359 --rc genhtml_legend=1 00:15:09.359 --rc geninfo_all_blocks=1 00:15:09.359 --rc geninfo_unexecuted_blocks=1 00:15:09.359 00:15:09.359 ' 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:09.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.359 --rc genhtml_branch_coverage=1 00:15:09.359 --rc genhtml_function_coverage=1 00:15:09.359 --rc genhtml_legend=1 00:15:09.359 --rc geninfo_all_blocks=1 00:15:09.359 --rc geninfo_unexecuted_blocks=1 00:15:09.359 00:15:09.359 ' 00:15:09.359 14:20:33 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:15:09.359 14:20:33 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
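A few entries back (scripts/common.sh@333-368) the installed lcov version was split on IFS=.-: and compared component-by-component against 2, to pick the coverage flag spelling this lcov understands; the chosen set is the LCOV_OPTS block printed above. That comparator reduces to roughly the following sketch; missing components default to 0, and the real helper additionally validates each component through decimal():

    cmp_versions() {    # e.g. cmp_versions 1.15 '<' 2  ->  success
        local op=$2 IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]    # equality handling simplified here
    }

With 1.15 < 2 true, the flag set for pre-2.0 lcov (--rc lcov_branch_coverage=1 ...) is the one exported.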
00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:09.359 14:20:33 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:09.359 14:20:33 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:09.359 14:20:33 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:09.359 14:20:34 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:09.359 14:20:34 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:09.359 14:20:34 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:09.359 14:20:34 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:09.359 14:20:34 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:09.359 14:20:34 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
00:15:09.359 14:20:34 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:09.359 14:20:34 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:09.360 14:20:34 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:09.360 14:20:34 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:09.360 14:20:34 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:09.360 14:20:34 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:09.360 14:20:34 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:09.360 14:20:34 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:09.360 #define SPDK_CONFIG_H 00:15:09.360 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:09.360 #define SPDK_CONFIG_APPS 1 00:15:09.360 #define SPDK_CONFIG_ARCH native 00:15:09.360 #define SPDK_CONFIG_ASAN 1 00:15:09.360 #undef SPDK_CONFIG_AVAHI 00:15:09.360 #undef SPDK_CONFIG_CET 00:15:09.360 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:09.360 #define SPDK_CONFIG_COVERAGE 1 00:15:09.360 #define SPDK_CONFIG_CROSS_PREFIX 00:15:09.360 #undef SPDK_CONFIG_CRYPTO 00:15:09.360 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:09.360 #undef SPDK_CONFIG_CUSTOMOCF 00:15:09.360 #undef SPDK_CONFIG_DAOS 00:15:09.360 #define SPDK_CONFIG_DAOS_DIR 00:15:09.360 #define SPDK_CONFIG_DEBUG 1 00:15:09.360 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:09.360 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:15:09.360 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:09.360 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:09.360 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:09.360 #undef SPDK_CONFIG_DPDK_UADK 00:15:09.360 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:09.360 #define SPDK_CONFIG_EXAMPLES 1 00:15:09.360 #undef SPDK_CONFIG_FC 00:15:09.360 #define SPDK_CONFIG_FC_PATH 00:15:09.360 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:09.360 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:09.360 #define SPDK_CONFIG_FSDEV 1 00:15:09.360 #undef SPDK_CONFIG_FUSE 00:15:09.360 #undef SPDK_CONFIG_FUZZER 00:15:09.360 #define SPDK_CONFIG_FUZZER_LIB 00:15:09.360 #undef SPDK_CONFIG_GOLANG 00:15:09.360 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:09.360 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:09.360 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:09.360 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:09.360 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:09.360 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:09.360 #undef SPDK_CONFIG_HAVE_LZ4 00:15:09.360 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:09.360 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:09.360 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:09.360 #define SPDK_CONFIG_IDXD 1 00:15:09.360 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:09.360 #undef SPDK_CONFIG_IPSEC_MB 00:15:09.360 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:09.360 #define SPDK_CONFIG_ISAL 1 00:15:09.360 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:09.360 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:09.360 #define SPDK_CONFIG_LIBDIR 00:15:09.360 #undef SPDK_CONFIG_LTO 00:15:09.360 #define SPDK_CONFIG_MAX_LCORES 128 00:15:09.360 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:09.360 #define SPDK_CONFIG_NVME_CUSE 1 00:15:09.360 #undef SPDK_CONFIG_OCF 00:15:09.360 #define SPDK_CONFIG_OCF_PATH 00:15:09.360 #define SPDK_CONFIG_OPENSSL_PATH 00:15:09.360 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:09.360 
#define SPDK_CONFIG_PGO_DIR 00:15:09.360 #undef SPDK_CONFIG_PGO_USE 00:15:09.360 #define SPDK_CONFIG_PREFIX /usr/local 00:15:09.360 #undef SPDK_CONFIG_RAID5F 00:15:09.360 #undef SPDK_CONFIG_RBD 00:15:09.360 #define SPDK_CONFIG_RDMA 1 00:15:09.360 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:09.360 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:09.360 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:09.360 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:09.360 #define SPDK_CONFIG_SHARED 1 00:15:09.360 #undef SPDK_CONFIG_SMA 00:15:09.360 #define SPDK_CONFIG_TESTS 1 00:15:09.360 #undef SPDK_CONFIG_TSAN 00:15:09.360 #define SPDK_CONFIG_UBLK 1 00:15:09.360 #define SPDK_CONFIG_UBSAN 1 00:15:09.360 #undef SPDK_CONFIG_UNIT_TESTS 00:15:09.360 #undef SPDK_CONFIG_URING 00:15:09.360 #define SPDK_CONFIG_URING_PATH 00:15:09.360 #undef SPDK_CONFIG_URING_ZNS 00:15:09.360 #undef SPDK_CONFIG_USDT 00:15:09.360 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:09.360 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:09.360 #undef SPDK_CONFIG_VFIO_USER 00:15:09.360 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:09.360 #define SPDK_CONFIG_VHOST 1 00:15:09.360 #define SPDK_CONFIG_VIRTIO 1 00:15:09.360 #undef SPDK_CONFIG_VTUNE 00:15:09.360 #define SPDK_CONFIG_VTUNE_DIR 00:15:09.360 #define SPDK_CONFIG_WERROR 1 00:15:09.360 #define SPDK_CONFIG_WPDK_DIR 00:15:09.360 #define SPDK_CONFIG_XNVME 1 00:15:09.360 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:09.360 14:20:34 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.360 14:20:34 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.360 14:20:34 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.360 14:20:34 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.360 14:20:34 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.360 14:20:34 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.360 14:20:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.360 14:20:34 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.360 14:20:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:09.360 14:20:34 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@68 -- # uname -s 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:15:09.360 14:20:34 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:09.360 14:20:34 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:15:09.361 14:20:34 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:09.361 
14:20:34 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:09.361 
14:20:34 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:09.361 14:20:34 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
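
For readers following the trace: the sanitizer setup exported above reduces to a short shell sequence that writes a LeakSanitizer suppression for libfuse3 into a scratch file and points LSAN_OPTIONS at it. A minimal sketch using the same paths and option strings that appear in this run:

# Sketch of the sanitizer environment assembled by autotest_common.sh above
rm -rf /var/tmp/asan_suppression_file
echo "leak:libfuse3.so" >> /var/tmp/asan_suppression_file   # ignore known fuse3 leak reports
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
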
00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 71229 ]] 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 71229 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Et26Ez 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.Et26Ez/tests/xnvme /tmp/spdk.Et26Ez 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974016000 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5594165248 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:09.362 
14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974016000 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5594165248 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:15:09.362 14:20:34 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97951244288 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1751535616 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:09.362 * Looking for test storage... 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974016000 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:15:09.362 14:20:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:09.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:09.363 14:20:34 nvme_xnvme -- 
common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:09.363 14:20:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:09.623 14:20:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:09.623 14:20:34 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.623 14:20:34 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:09.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.623 --rc genhtml_branch_coverage=1 00:15:09.623 --rc genhtml_function_coverage=1 00:15:09.623 --rc genhtml_legend=1 00:15:09.623 --rc geninfo_all_blocks=1 00:15:09.623 --rc geninfo_unexecuted_blocks=1 00:15:09.623 00:15:09.623 ' 00:15:09.623 14:20:34 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:09.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.623 --rc genhtml_branch_coverage=1 00:15:09.623 --rc genhtml_function_coverage=1 00:15:09.623 --rc genhtml_legend=1 00:15:09.623 --rc geninfo_all_blocks=1 00:15:09.623 --rc geninfo_unexecuted_blocks=1 00:15:09.623 00:15:09.623 ' 00:15:09.623 14:20:34 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:09.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.623 --rc genhtml_branch_coverage=1 00:15:09.623 --rc genhtml_function_coverage=1 00:15:09.623 --rc genhtml_legend=1 00:15:09.623 --rc geninfo_all_blocks=1 00:15:09.623 --rc geninfo_unexecuted_blocks=1 00:15:09.623 00:15:09.623 ' 00:15:09.623 14:20:34 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:09.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.623 --rc genhtml_branch_coverage=1 00:15:09.623 --rc genhtml_function_coverage=1 00:15:09.623 --rc genhtml_legend=1 00:15:09.623 --rc geninfo_all_blocks=1 00:15:09.623 --rc geninfo_unexecuted_blocks=1 00:15:09.623 00:15:09.623 ' 00:15:09.623 14:20:34 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.623 14:20:34 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.623 14:20:34 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.623 14:20:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.623 14:20:34 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.623 14:20:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:09.623 14:20:34 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.623 14:20:34 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:15:09.623 14:20:34 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:15:09.623 14:20:34 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:15:09.623 14:20:34 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:15:09.624 
14:20:34 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:15:09.624 14:20:34 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:10.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:10.451 Waiting for block devices as requested 00:15:10.451 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:10.709 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:10.709 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:10.968 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:16.244 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:16.244 14:20:40 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:15:16.503 14:20:41 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:15:16.503 14:20:41 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:15:16.762 14:20:41 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:16.762 14:20:41 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:16.762 No valid GPT data, bailing 00:15:16.762 14:20:41 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:16.762 14:20:41 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:15:16.762 14:20:41 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:16.762 14:20:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:16.762 14:20:41 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:16.762 14:20:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.762 14:20:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.762 ************************************ 00:15:16.762 START TEST xnvme_rpc 00:15:16.762 ************************************ 00:15:16.762 14:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:16.762 14:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:16.762 14:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:16.762 14:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:16.762 14:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:16.762 14:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71624 00:15:16.762 14:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.762 14:20:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71624 00:15:16.762 14:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71624 ']' 00:15:16.763 14:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.763 14:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.763 14:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.763 14:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.763 14:20:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.763 [2024-12-10 14:20:41.580838] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:15:16.763 [2024-12-10 14:20:41.581070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71624 ] 00:15:17.022 [2024-12-10 14:20:41.762539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.280 [2024-12-10 14:20:41.891302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.218 xnvme_bdev 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:18.218 14:20:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.218 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.218 14:20:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:18.218 14:20:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:18.218 14:20:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:18.218 14:20:43 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.218 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.218 14:20:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:18.218 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71624 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71624 ']' 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71624 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71624 00:15:18.477 killing process with pid 71624 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71624' 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71624 00:15:18.477 14:20:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71624 00:15:21.019 00:15:21.019 real 0m4.174s 00:15:21.019 user 0m4.036s 00:15:21.019 sys 0m0.736s 00:15:21.019 14:20:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.019 ************************************ 00:15:21.019 END TEST xnvme_rpc 00:15:21.019 ************************************ 00:15:21.019 14:20:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.019 14:20:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:21.019 14:20:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:21.019 14:20:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.019 14:20:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.019 ************************************ 00:15:21.019 START TEST xnvme_bdevperf 00:15:21.019 ************************************ 00:15:21.019 14:20:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:21.019 14:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:21.019 14:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:21.019 14:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.019 14:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:21.019 14:20:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:21.019 14:20:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:21.019 14:20:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.019 { 00:15:21.019 "subsystems": [ 00:15:21.019 { 00:15:21.019 "subsystem": "bdev", 00:15:21.019 "config": [ 00:15:21.019 { 00:15:21.019 "params": { 00:15:21.019 "io_mechanism": "libaio", 00:15:21.019 "conserve_cpu": false, 00:15:21.019 "filename": "/dev/nvme0n1", 00:15:21.019 "name": "xnvme_bdev" 00:15:21.019 }, 00:15:21.019 "method": "bdev_xnvme_create" 00:15:21.019 }, 00:15:21.019 { 00:15:21.019 "method": "bdev_wait_for_examine" 00:15:21.019 } 00:15:21.019 ] 00:15:21.019 } 00:15:21.019 ] 00:15:21.019 } 00:15:21.019 [2024-12-10 14:20:45.829095] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:15:21.019 [2024-12-10 14:20:45.829220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:15:21.278 [2024-12-10 14:20:46.015909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.537 [2024-12-10 14:20:46.147039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.796 Running I/O for 5 seconds... 00:15:24.112 45413.00 IOPS, 177.39 MiB/s [2024-12-10T14:20:49.897Z] 43779.50 IOPS, 171.01 MiB/s [2024-12-10T14:20:50.834Z] 43428.33 IOPS, 169.64 MiB/s [2024-12-10T14:20:51.770Z] 43891.50 IOPS, 171.45 MiB/s [2024-12-10T14:20:51.770Z] 44078.40 IOPS, 172.18 MiB/s 00:15:26.936 Latency(us) 00:15:26.936 [2024-12-10T14:20:51.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.936 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:26.936 xnvme_bdev : 5.00 44058.68 172.10 0.00 0.00 1449.38 345.45 4500.67 00:15:26.936 [2024-12-10T14:20:51.770Z] =================================================================================================================== 00:15:26.936 [2024-12-10T14:20:51.770Z] Total : 44058.68 172.10 0.00 0.00 1449.38 345.45 4500.67 00:15:27.873 14:20:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:27.873 14:20:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:27.873 14:20:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:27.873 14:20:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:27.873 14:20:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:27.873 { 00:15:27.873 "subsystems": [ 00:15:27.873 { 00:15:27.873 "subsystem": "bdev", 00:15:27.873 "config": [ 00:15:27.873 { 00:15:27.873 "params": { 00:15:27.873 "io_mechanism": "libaio", 00:15:27.873 "conserve_cpu": false, 00:15:27.873 "filename": "/dev/nvme0n1", 00:15:27.873 "name": "xnvme_bdev" 00:15:27.873 }, 00:15:27.873 "method": "bdev_xnvme_create" 00:15:27.873 }, 00:15:27.873 { 00:15:27.873 "method": "bdev_wait_for_examine" 00:15:27.873 } 00:15:27.873 ] 00:15:27.873 } 00:15:27.873 ] 00:15:27.873 } 00:15:28.133 [2024-12-10 14:20:52.749709] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
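
The JSON blocks printed by gen_conf above are what bdevperf consumes through --json /dev/fd/62: a single bdev subsystem that creates the xnvme bdev and then waits for examine. A hypothetical standalone reproduction of the first (randread) run, feeding the same config over descriptor 62 with a here-doc in place of the harness plumbing; the config body is copied from the gen_conf dump in the trace:

# Assumed equivalent of the traced bdevperf step (not the harness's exact wiring)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
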
00:15:28.133 [2024-12-10 14:20:52.749833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71790 ] 00:15:28.133 [2024-12-10 14:20:52.931038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.393 [2024-12-10 14:20:53.035544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.652 Running I/O for 5 seconds... 00:15:30.968 43567.00 IOPS, 170.18 MiB/s [2024-12-10T14:20:56.739Z] 45064.00 IOPS, 176.03 MiB/s [2024-12-10T14:20:57.676Z] 44522.00 IOPS, 173.91 MiB/s [2024-12-10T14:20:58.665Z] 44597.50 IOPS, 174.21 MiB/s 00:15:33.831 Latency(us) 00:15:33.831 [2024-12-10T14:20:58.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.831 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:33.831 xnvme_bdev : 5.00 44329.13 173.16 0.00 0.00 1440.22 166.97 2987.28 00:15:33.831 [2024-12-10T14:20:58.665Z] =================================================================================================================== 00:15:33.831 [2024-12-10T14:20:58.665Z] Total : 44329.13 173.16 0.00 0.00 1440.22 166.97 2987.28 00:15:34.768 00:15:34.768 real 0m13.746s 00:15:34.768 user 0m5.083s 00:15:34.768 sys 0m6.021s 00:15:34.768 14:20:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.768 14:20:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:34.768 ************************************ 00:15:34.768 END TEST xnvme_bdevperf 00:15:34.768 ************************************ 00:15:34.768 14:20:59 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:34.768 14:20:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.768 14:20:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.768 14:20:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.768 ************************************ 00:15:34.768 START TEST xnvme_fio_plugin 00:15:34.768 ************************************ 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:34.768 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:35.027 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.027 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.027 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:35.027 { 00:15:35.027 "subsystems": [ 00:15:35.027 { 00:15:35.027 "subsystem": "bdev", 00:15:35.027 "config": [ 00:15:35.027 { 00:15:35.027 "params": { 00:15:35.027 "io_mechanism": "libaio", 00:15:35.027 "conserve_cpu": false, 00:15:35.027 "filename": "/dev/nvme0n1", 00:15:35.027 "name": "xnvme_bdev" 00:15:35.027 }, 00:15:35.027 "method": "bdev_xnvme_create" 00:15:35.027 }, 00:15:35.027 { 00:15:35.027 "method": "bdev_wait_for_examine" 00:15:35.027 } 00:15:35.027 ] 00:15:35.027 } 00:15:35.027 ] 00:15:35.027 } 00:15:35.027 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:35.027 14:20:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.027 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:35.027 fio-3.35 00:15:35.027 Starting 1 thread 00:15:41.591 00:15:41.591 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71916: Tue Dec 10 14:21:05 2024 00:15:41.591 read: IOPS=44.1k, BW=172MiB/s (181MB/s)(862MiB/5001msec) 00:15:41.591 slat (usec): min=4, max=1177, avg=19.98, stdev=29.64 00:15:41.591 clat (usec): min=68, max=6531, avg=844.04, stdev=505.94 00:15:41.591 lat (usec): min=108, max=6833, avg=864.02, stdev=507.85 00:15:41.591 clat percentiles (usec): 00:15:41.591 | 1.00th=[ 159], 5.00th=[ 237], 10.00th=[ 306], 20.00th=[ 429], 00:15:41.591 | 30.00th=[ 545], 40.00th=[ 668], 50.00th=[ 783], 60.00th=[ 898], 00:15:41.591 | 70.00th=[ 1029], 80.00th=[ 1172], 90.00th=[ 1385], 95.00th=[ 1598], 00:15:41.591 | 99.00th=[ 2737], 99.50th=[ 3392], 99.90th=[ 4621], 99.95th=[ 4948], 00:15:41.591 | 99.99th=[ 5669] 00:15:41.592 bw ( KiB/s): min=141720, max=209792, per=100.00%, avg=178456.00, stdev=25447.68, samples=9 
00:15:41.592 iops : min=35430, max=52448, avg=44614.00, stdev=6361.92, samples=9 00:15:41.592 lat (usec) : 100=0.13%, 250=5.90%, 500=19.95%, 750=21.27%, 1000=20.90% 00:15:41.592 lat (msec) : 2=29.60%, 4=2.01%, 10=0.24% 00:15:41.592 cpu : usr=23.14%, sys=58.86%, ctx=67, majf=0, minf=764 00:15:41.592 IO depths : 1=0.2%, 2=1.0%, 4=4.0%, 8=11.5%, 16=26.3%, 32=55.4%, >=64=1.8% 00:15:41.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.592 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:41.592 issued rwts: total=220583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.592 00:15:41.592 Run status group 0 (all jobs): 00:15:41.592 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=862MiB (904MB), run=5001-5001msec 00:15:42.158 ----------------------------------------------------- 00:15:42.158 Suppressions used: 00:15:42.158 count bytes template 00:15:42.158 1 11 /usr/src/fio/parse.c 00:15:42.158 1 8 libtcmalloc_minimal.so 00:15:42.158 1 904 libcrypto.so 00:15:42.158 ----------------------------------------------------- 00:15:42.158 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:42.158 { 00:15:42.158 "subsystems": [ 00:15:42.158 { 00:15:42.158 "subsystem": "bdev", 00:15:42.158 "config": [ 00:15:42.158 { 00:15:42.158 "params": { 00:15:42.158 "io_mechanism": "libaio", 
00:15:42.158 "conserve_cpu": false, 00:15:42.158 "filename": "/dev/nvme0n1", 00:15:42.158 "name": "xnvme_bdev" 00:15:42.158 }, 00:15:42.158 "method": "bdev_xnvme_create" 00:15:42.158 }, 00:15:42.158 { 00:15:42.158 "method": "bdev_wait_for_examine" 00:15:42.158 } 00:15:42.158 ] 00:15:42.158 } 00:15:42.158 ] 00:15:42.158 } 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.158 14:21:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.417 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:42.417 fio-3.35 00:15:42.417 Starting 1 thread 00:15:48.977 00:15:48.977 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72013: Tue Dec 10 14:21:12 2024 00:15:48.977 write: IOPS=55.2k, BW=216MiB/s (226MB/s)(1078MiB/5001msec); 0 zone resets 00:15:48.977 slat (usec): min=4, max=1076, avg=15.88, stdev=31.27 00:15:48.977 clat (usec): min=85, max=3950, avg=699.05, stdev=305.70 00:15:48.977 lat (usec): min=142, max=4047, avg=714.93, stdev=302.94 00:15:48.977 clat percentiles (usec): 00:15:48.977 | 1.00th=[ 172], 5.00th=[ 260], 10.00th=[ 322], 20.00th=[ 424], 00:15:48.977 | 30.00th=[ 510], 40.00th=[ 594], 50.00th=[ 676], 60.00th=[ 758], 00:15:48.977 | 70.00th=[ 848], 80.00th=[ 955], 90.00th=[ 1090], 95.00th=[ 1205], 00:15:48.977 | 99.00th=[ 1450], 99.50th=[ 1631], 99.90th=[ 2376], 99.95th=[ 2704], 00:15:48.977 | 99.99th=[ 3261] 00:15:48.977 bw ( KiB/s): min=205064, max=230168, per=100.00%, avg=221281.78, stdev=7908.35, samples=9 00:15:48.977 iops : min=51266, max=57542, avg=55320.44, stdev=1977.09, samples=9 00:15:48.977 lat (usec) : 100=0.12%, 250=4.25%, 500=24.47%, 750=29.86%, 1000=25.07% 00:15:48.977 lat (msec) : 2=16.01%, 4=0.23% 00:15:48.977 cpu : usr=25.40%, sys=62.34%, ctx=14, majf=0, minf=765 00:15:48.977 IO depths : 1=0.1%, 2=0.7%, 4=3.1%, 8=9.4%, 16=25.1%, 32=59.7%, >=64=1.9% 00:15:48.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.977 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:48.977 issued rwts: total=0,276046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:48.977 00:15:48.977 Run status group 0 (all jobs): 00:15:48.977 WRITE: bw=216MiB/s (226MB/s), 216MiB/s-216MiB/s (226MB/s-226MB/s), io=1078MiB (1131MB), run=5001-5001msec 00:15:49.546 ----------------------------------------------------- 00:15:49.546 Suppressions used: 00:15:49.546 count bytes template 00:15:49.546 1 11 /usr/src/fio/parse.c 00:15:49.546 1 8 libtcmalloc_minimal.so 00:15:49.546 1 904 libcrypto.so 00:15:49.546 ----------------------------------------------------- 00:15:49.546 00:15:49.546 00:15:49.546 real 0m14.741s 00:15:49.546 user 0m6.030s 00:15:49.546 sys 0m6.864s 00:15:49.546 ************************************ 00:15:49.546 END TEST 
xnvme_fio_plugin 00:15:49.546 ************************************ 00:15:49.546 14:21:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.546 14:21:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:49.546 14:21:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:49.546 14:21:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:49.546 14:21:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:49.546 14:21:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:49.546 14:21:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:49.546 14:21:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.546 14:21:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.546 ************************************ 00:15:49.546 START TEST xnvme_rpc 00:15:49.546 ************************************ 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72096 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72096 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72096 ']' 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.546 14:21:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.806 [2024-12-10 14:21:14.493849] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
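
Before the trace moves on to the conserve_cpu variant of xnvme_rpc, the fio step that just finished is worth condensing: the harness preloads the ASan runtime ahead of SPDK's fio plugin (ASan must come first in LD_PRELOAD, hence the ldd/grep-libasan dance above), then runs fio with the spdk_bdev ioengine against the generated JSON config. All flags below are verbatim from the trace; the /dev/fd/62 config is supplied by gen_conf as in the bdevperf sketch earlier:

# Condensed from the xnvme_fio_plugin trace above (randread pass)
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
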
00:15:49.806 [2024-12-10 14:21:14.494863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72096 ] 00:15:50.065 [2024-12-10 14:21:14.690046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.065 [2024-12-10 14:21:14.795935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.001 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.002 xnvme_bdev 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:51.002 14:21:15 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.002 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72096 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72096 ']' 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72096 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72096 00:15:51.261 killing process with pid 72096 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72096' 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72096 00:15:51.261 14:21:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72096 00:15:53.798 00:15:53.798 real 0m3.816s 00:15:53.798 user 0m3.821s 00:15:53.798 sys 0m0.574s 00:15:53.798 14:21:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.798 ************************************ 00:15:53.798 END TEST xnvme_rpc 00:15:53.798 ************************************ 00:15:53.798 14:21:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.798 14:21:18 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:53.798 14:21:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:53.798 14:21:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.798 14:21:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.798 ************************************ 00:15:53.798 START TEST xnvme_bdevperf 00:15:53.798 ************************************ 00:15:53.798 14:21:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:53.798 14:21:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:53.798 14:21:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:53.798 14:21:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:53.798 14:21:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:53.798 14:21:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:15:53.798 14:21:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:53.798 14:21:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:53.798 { 00:15:53.798 "subsystems": [ 00:15:53.798 { 00:15:53.798 "subsystem": "bdev", 00:15:53.798 "config": [ 00:15:53.798 { 00:15:53.798 "params": { 00:15:53.798 "io_mechanism": "libaio", 00:15:53.798 "conserve_cpu": true, 00:15:53.798 "filename": "/dev/nvme0n1", 00:15:53.798 "name": "xnvme_bdev" 00:15:53.798 }, 00:15:53.798 "method": "bdev_xnvme_create" 00:15:53.798 }, 00:15:53.798 { 00:15:53.798 "method": "bdev_wait_for_examine" 00:15:53.798 } 00:15:53.798 ] 00:15:53.798 } 00:15:53.798 ] 00:15:53.798 } 00:15:53.798 [2024-12-10 14:21:18.364100] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:15:53.798 [2024-12-10 14:21:18.364518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72181 ] 00:15:53.798 [2024-12-10 14:21:18.547844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.058 [2024-12-10 14:21:18.654594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.316 Running I/O for 5 seconds... 00:15:56.184 45208.00 IOPS, 176.59 MiB/s [2024-12-10T14:21:22.393Z] 44894.50 IOPS, 175.37 MiB/s [2024-12-10T14:21:23.325Z] 44628.67 IOPS, 174.33 MiB/s [2024-12-10T14:21:24.260Z] 44143.75 IOPS, 172.44 MiB/s [2024-12-10T14:21:24.261Z] 43294.00 IOPS, 169.12 MiB/s 00:15:59.427 Latency(us) 00:15:59.427 [2024-12-10T14:21:24.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.427 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:59.427 xnvme_bdev : 5.00 43270.87 169.03 0.00 0.00 1475.87 404.67 5500.81 00:15:59.427 [2024-12-10T14:21:24.261Z] =================================================================================================================== 00:15:59.427 [2024-12-10T14:21:24.261Z] Total : 43270.87 169.03 0.00 0.00 1475.87 404.67 5500.81 00:16:00.365 14:21:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:00.365 14:21:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:00.365 14:21:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:00.365 14:21:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:00.365 14:21:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:00.365 { 00:16:00.365 "subsystems": [ 00:16:00.365 { 00:16:00.365 "subsystem": "bdev", 00:16:00.365 "config": [ 00:16:00.365 { 00:16:00.365 "params": { 00:16:00.365 "io_mechanism": "libaio", 00:16:00.365 "conserve_cpu": true, 00:16:00.365 "filename": "/dev/nvme0n1", 00:16:00.365 "name": "xnvme_bdev" 00:16:00.365 }, 00:16:00.365 "method": "bdev_xnvme_create" 00:16:00.365 }, 00:16:00.365 { 00:16:00.365 "method": "bdev_wait_for_examine" 00:16:00.365 } 00:16:00.365 ] 00:16:00.365 } 00:16:00.365 ] 00:16:00.365 } 00:16:00.624 [2024-12-10 14:21:25.211320] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
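[Both bdevperf passes receive their bdev table as JSON on an anonymous descriptor (--json /dev/fd/62); the braced blocks interleaved with the trace are that config being echoed. The same run is easier to reproduce from a plain file. A minimal sketch using the exact config printed above; the device path is specific to this CI host:

cat > /tmp/xnvme_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
]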
00:16:00.624 [2024-12-10 14:21:25.211439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72262 ] 00:16:00.624 [2024-12-10 14:21:25.396485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.884 [2024-12-10 14:21:25.499289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.143 Running I/O for 5 seconds... 00:16:03.456 46806.00 IOPS, 182.84 MiB/s [2024-12-10T14:21:29.227Z] 44696.00 IOPS, 174.59 MiB/s [2024-12-10T14:21:30.163Z] 42714.00 IOPS, 166.85 MiB/s [2024-12-10T14:21:31.117Z] 42977.00 IOPS, 167.88 MiB/s 00:16:06.283 Latency(us) 00:16:06.283 [2024-12-10T14:21:31.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.283 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:06.283 xnvme_bdev : 5.00 42894.58 167.56 0.00 0.00 1488.63 180.95 7474.79 00:16:06.283 [2024-12-10T14:21:31.117Z] =================================================================================================================== 00:16:06.283 [2024-12-10T14:21:31.117Z] Total : 42894.58 167.56 0.00 0.00 1488.63 180.95 7474.79 00:16:07.677 00:16:07.677 real 0m13.813s 00:16:07.677 user 0m5.109s 00:16:07.677 sys 0m6.324s 00:16:07.677 14:21:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.677 14:21:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:07.677 ************************************ 00:16:07.677 END TEST xnvme_bdevperf 00:16:07.677 ************************************ 00:16:07.677 14:21:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:07.677 14:21:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:07.677 14:21:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.677 14:21:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:07.677 ************************************ 00:16:07.677 START TEST xnvme_fio_plugin 00:16:07.677 ************************************ 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.677 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:07.678 14:21:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:07.678 { 00:16:07.678 "subsystems": [ 00:16:07.678 { 00:16:07.678 "subsystem": "bdev", 00:16:07.678 "config": [ 00:16:07.678 { 00:16:07.678 "params": { 00:16:07.678 "io_mechanism": "libaio", 00:16:07.678 "conserve_cpu": true, 00:16:07.678 "filename": "/dev/nvme0n1", 00:16:07.678 "name": "xnvme_bdev" 00:16:07.678 }, 00:16:07.678 "method": "bdev_xnvme_create" 00:16:07.678 }, 00:16:07.678 { 00:16:07.678 "method": "bdev_wait_for_examine" 00:16:07.678 } 00:16:07.678 ] 00:16:07.678 } 00:16:07.678 ] 00:16:07.678 } 00:16:07.678 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:07.678 fio-3.35 00:16:07.678 Starting 1 thread 00:16:14.246 00:16:14.246 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72381: Tue Dec 10 14:21:38 2024 00:16:14.246 read: IOPS=49.2k, BW=192MiB/s (201MB/s)(961MiB/5001msec) 00:16:14.246 slat (usec): min=4, max=1356, avg=17.59, stdev=28.77 00:16:14.246 clat (usec): min=72, max=5913, avg=787.76, stdev=467.28 00:16:14.246 lat (usec): min=137, max=6065, avg=805.35, stdev=469.56 00:16:14.246 clat percentiles (usec): 00:16:14.246 | 1.00th=[ 174], 5.00th=[ 253], 10.00th=[ 326], 20.00th=[ 437], 00:16:14.246 | 30.00th=[ 537], 40.00th=[ 635], 50.00th=[ 725], 60.00th=[ 816], 00:16:14.246 | 70.00th=[ 922], 80.00th=[ 1045], 90.00th=[ 1221], 95.00th=[ 1450], 00:16:14.246 | 99.00th=[ 2769], 99.50th=[ 3359], 99.90th=[ 4293], 99.95th=[ 4555], 00:16:14.246 | 99.99th=[ 4948] 00:16:14.246 bw ( KiB/s): min=176504, max=216960, per=97.99%, avg=192789.56, stdev=13378.58, samples=9 
00:16:14.246 iops : min=44126, max=54240, avg=48197.33, stdev=3344.65, samples=9 00:16:14.246 lat (usec) : 100=0.07%, 250=4.75%, 500=21.33%, 750=26.48%, 1000=24.42% 00:16:14.246 lat (msec) : 2=20.76%, 4=1.97%, 10=0.21% 00:16:14.246 cpu : usr=27.20%, sys=55.36%, ctx=84, majf=0, minf=764 00:16:14.246 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=10.0%, 16=25.3%, 32=58.2%, >=64=1.9% 00:16:14.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.246 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:16:14.246 issued rwts: total=245980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.246 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.246 00:16:14.246 Run status group 0 (all jobs): 00:16:14.246 READ: bw=192MiB/s (201MB/s), 192MiB/s-192MiB/s (201MB/s-201MB/s), io=961MiB (1008MB), run=5001-5001msec 00:16:15.184 ----------------------------------------------------- 00:16:15.184 Suppressions used: 00:16:15.184 count bytes template 00:16:15.184 1 11 /usr/src/fio/parse.c 00:16:15.184 1 8 libtcmalloc_minimal.so 00:16:15.184 1 904 libcrypto.so 00:16:15.184 ----------------------------------------------------- 00:16:15.184 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:15.184 14:21:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:15.184 { 00:16:15.184 "subsystems": [ 00:16:15.184 { 00:16:15.184 "subsystem": "bdev", 00:16:15.184 "config": [ 00:16:15.184 { 00:16:15.184 "params": { 00:16:15.184 "io_mechanism": "libaio", 00:16:15.184 "conserve_cpu": true, 00:16:15.184 "filename": "/dev/nvme0n1", 00:16:15.184 "name": "xnvme_bdev" 00:16:15.184 }, 00:16:15.184 "method": "bdev_xnvme_create" 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "bdev_wait_for_examine" 00:16:15.184 } 00:16:15.184 ] 00:16:15.184 } 00:16:15.184 ] 00:16:15.184 } 00:16:15.184 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:15.184 fio-3.35 00:16:15.184 Starting 1 thread 00:16:21.750 00:16:21.750 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72484: Tue Dec 10 14:21:45 2024 00:16:21.750 write: IOPS=46.9k, BW=183MiB/s (192MB/s)(915MiB/5001msec); 0 zone resets 00:16:21.750 slat (usec): min=4, max=995, avg=18.65, stdev=29.73 00:16:21.750 clat (usec): min=48, max=9355, avg=813.48, stdev=494.56 00:16:21.750 lat (usec): min=62, max=9360, avg=832.13, stdev=496.76 00:16:21.750 clat percentiles (usec): 00:16:21.750 | 1.00th=[ 176], 5.00th=[ 251], 10.00th=[ 322], 20.00th=[ 441], 00:16:21.750 | 30.00th=[ 545], 40.00th=[ 652], 50.00th=[ 750], 60.00th=[ 857], 00:16:21.750 | 70.00th=[ 963], 80.00th=[ 1090], 90.00th=[ 1270], 95.00th=[ 1500], 00:16:21.750 | 99.00th=[ 2835], 99.50th=[ 3458], 99.90th=[ 4490], 99.95th=[ 5866], 00:16:21.750 | 99.99th=[ 7832] 00:16:21.750 bw ( KiB/s): min=158144, max=208000, per=100.00%, avg=187629.33, stdev=14686.95, samples=9 00:16:21.750 iops : min=39536, max=52000, avg=46907.33, stdev=3671.74, samples=9 00:16:21.750 lat (usec) : 50=0.01%, 100=0.10%, 250=4.78%, 500=20.50%, 750=24.66% 00:16:21.750 lat (usec) : 1000=23.33% 00:16:21.750 lat (msec) : 2=24.32%, 4=2.08%, 10=0.24% 00:16:21.750 cpu : usr=26.10%, sys=56.46%, ctx=38, majf=0, minf=765 00:16:21.750 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=10.7%, 16=25.7%, 32=56.9%, >=64=1.8% 00:16:21.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.750 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:16:21.750 issued rwts: total=0,234330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.750 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:21.750 00:16:21.750 Run status group 0 (all jobs): 00:16:21.750 WRITE: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=915MiB (960MB), run=5001-5001msec 00:16:22.688 ----------------------------------------------------- 00:16:22.688 Suppressions used: 00:16:22.688 count bytes template 00:16:22.688 1 11 /usr/src/fio/parse.c 00:16:22.688 1 8 libtcmalloc_minimal.so 00:16:22.688 1 904 libcrypto.so 00:16:22.688 ----------------------------------------------------- 00:16:22.688 00:16:22.688 00:16:22.688 real 0m15.170s 00:16:22.688 user 0m6.555s 00:16:22.688 sys 0m6.554s 00:16:22.688 
************************************ 00:16:22.688 END TEST xnvme_fio_plugin 00:16:22.688 ************************************ 00:16:22.688 14:21:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.688 14:21:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:22.688 14:21:47 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:22.688 14:21:47 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:22.688 14:21:47 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:22.688 14:21:47 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:22.688 14:21:47 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:22.688 14:21:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:22.688 14:21:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:22.688 14:21:47 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:22.688 14:21:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:22.688 14:21:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:22.688 14:21:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.688 14:21:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:22.688 ************************************ 00:16:22.688 START TEST xnvme_rpc 00:16:22.688 ************************************ 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72571 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72571 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72571 ']' 00:16:22.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.688 14:21:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.948 [2024-12-10 14:21:47.532140] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
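[From this point the suite re-runs the same three tests with io_mechanism switched to io_uring, first with conserve_cpu=false and then true. Judging from the xnvme/xnvme.sh@75-@88 trace lines, the driving loop has roughly this shape; this is a sketch inferred from the trace, not the verbatim script (run_test is the autotest harness helper seen throughout the log):

declare -A method_bdev_xnvme_create_0=(
    [name]=xnvme_bdev
    [filename]=/dev/nvme0n1
)
xnvme_io=(libaio io_uring)          # the mechanisms exercised in this log
xnvme_conserve_cpu=(false true)

for io in "${xnvme_io[@]}"; do                          # xnvme.sh@75
    method_bdev_xnvme_create_0[io_mechanism]=$io        # xnvme.sh@76
    for cc in "${xnvme_conserve_cpu[@]}"; do            # xnvme.sh@82
        method_bdev_xnvme_create_0[conserve_cpu]=$cc    # xnvme.sh@83
        run_test xnvme_rpc xnvme_rpc                    # xnvme.sh@86
        run_test xnvme_bdevperf xnvme_bdevperf          # xnvme.sh@87
        run_test xnvme_fio_plugin xnvme_fio_plugin      # xnvme.sh@88
    done
done
]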
00:16:22.948 [2024-12-10 14:21:47.532619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72571 ] 00:16:22.948 [2024-12-10 14:21:47.717942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.207 [2024-12-10 14:21:47.849591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.145 xnvme_bdev 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.145 14:21:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72571 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72571 ']' 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72571 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72571 00:16:24.405 killing process with pid 72571 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72571' 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72571 00:16:24.405 14:21:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72571 00:16:26.939 ************************************ 00:16:26.939 END TEST xnvme_rpc 00:16:26.939 ************************************ 00:16:26.939 00:16:26.939 real 0m4.317s 00:16:26.939 user 0m4.225s 00:16:26.939 sys 0m0.742s 00:16:26.939 14:21:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.939 14:21:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.199 14:21:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:27.199 14:21:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:27.199 14:21:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.199 14:21:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.199 ************************************ 00:16:27.199 START TEST xnvme_bdevperf 00:16:27.199 ************************************ 00:16:27.200 14:21:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:27.200 14:21:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:27.200 14:21:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:27.200 14:21:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:27.200 14:21:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:27.200 14:21:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:16:27.200 14:21:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:27.200 14:21:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:27.200 { 00:16:27.200 "subsystems": [ 00:16:27.200 { 00:16:27.200 "subsystem": "bdev", 00:16:27.200 "config": [ 00:16:27.200 { 00:16:27.200 "params": { 00:16:27.200 "io_mechanism": "io_uring", 00:16:27.200 "conserve_cpu": false, 00:16:27.200 "filename": "/dev/nvme0n1", 00:16:27.200 "name": "xnvme_bdev" 00:16:27.200 }, 00:16:27.200 "method": "bdev_xnvme_create" 00:16:27.200 }, 00:16:27.200 { 00:16:27.200 "method": "bdev_wait_for_examine" 00:16:27.200 } 00:16:27.200 ] 00:16:27.200 } 00:16:27.200 ] 00:16:27.200 } 00:16:27.200 [2024-12-10 14:21:51.907867] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:16:27.200 [2024-12-10 14:21:51.908006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72663 ] 00:16:27.459 [2024-12-10 14:21:52.075848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.459 [2024-12-10 14:21:52.211183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.026 Running I/O for 5 seconds... 00:16:29.894 38413.00 IOPS, 150.05 MiB/s [2024-12-10T14:21:55.663Z] 36901.00 IOPS, 144.14 MiB/s [2024-12-10T14:21:57.039Z] 36262.33 IOPS, 141.65 MiB/s [2024-12-10T14:21:57.975Z] 35986.00 IOPS, 140.57 MiB/s 00:16:33.141 Latency(us) 00:16:33.141 [2024-12-10T14:21:57.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.141 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:33.142 xnvme_bdev : 5.00 36042.73 140.79 0.00 0.00 1771.03 289.52 10685.79 00:16:33.142 [2024-12-10T14:21:57.976Z] =================================================================================================================== 00:16:33.142 [2024-12-10T14:21:57.976Z] Total : 36042.73 140.79 0.00 0.00 1771.03 289.52 10685.79 00:16:34.079 14:21:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:34.079 14:21:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:34.079 14:21:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:34.079 14:21:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:34.079 14:21:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:34.079 { 00:16:34.079 "subsystems": [ 00:16:34.079 { 00:16:34.079 "subsystem": "bdev", 00:16:34.079 "config": [ 00:16:34.079 { 00:16:34.079 "params": { 00:16:34.079 "io_mechanism": "io_uring", 00:16:34.079 "conserve_cpu": false, 00:16:34.079 "filename": "/dev/nvme0n1", 00:16:34.079 "name": "xnvme_bdev" 00:16:34.079 }, 00:16:34.079 "method": "bdev_xnvme_create" 00:16:34.079 }, 00:16:34.079 { 00:16:34.079 "method": "bdev_wait_for_examine" 00:16:34.079 } 00:16:34.079 ] 00:16:34.079 } 00:16:34.079 ] 00:16:34.079 } 00:16:34.339 [2024-12-10 14:21:58.934179] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
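[The JSON fragments that the log prints between trace lines come from gen_conf, which serializes the method_bdev_xnvme_create_0 array onto the descriptor that bdevperf and fio read as /dev/fd/62. A loose, hypothetical equivalent to make the data flow concrete; the real helper sits with the dd/common.sh utilities and is more general:

gen_conf() {
    local -n p=method_bdev_xnvme_create_0    # nameref to the suite's parameter array
    cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "${p[io_mechanism]}",
            "conserve_cpu": ${p[conserve_cpu]},
            "filename": "${p[filename]}",
            "name": "${p[name]}"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
}

# conserve_cpu is left unquoted so it serializes as a JSON boolean, matching the
# configs above. Consumers attach the output to fd 62, e.g.:
#   bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 62< <(gen_conf)
]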
00:16:34.339 [2024-12-10 14:21:58.934442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72738 ] 00:16:34.339 [2024-12-10 14:21:59.112193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.607 [2024-12-10 14:21:59.246277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.896 Running I/O for 5 seconds... 00:16:37.213 24703.00 IOPS, 96.50 MiB/s [2024-12-10T14:22:02.987Z] 25887.50 IOPS, 101.12 MiB/s [2024-12-10T14:22:03.925Z] 24746.33 IOPS, 96.67 MiB/s [2024-12-10T14:22:04.863Z] 24415.75 IOPS, 95.37 MiB/s [2024-12-10T14:22:04.863Z] 24358.20 IOPS, 95.15 MiB/s 00:16:40.029 Latency(us) 00:16:40.029 [2024-12-10T14:22:04.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.029 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:40.029 xnvme_bdev : 5.01 24343.39 95.09 0.00 0.00 2620.89 1454.16 7737.99 00:16:40.029 [2024-12-10T14:22:04.863Z] =================================================================================================================== 00:16:40.029 [2024-12-10T14:22:04.863Z] Total : 24343.39 95.09 0.00 0.00 2620.89 1454.16 7737.99 00:16:41.407 00:16:41.407 real 0m14.058s 00:16:41.407 user 0m6.753s 00:16:41.407 sys 0m7.067s 00:16:41.407 14:22:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.407 ************************************ 00:16:41.407 END TEST xnvme_bdevperf 00:16:41.407 ************************************ 00:16:41.407 14:22:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:41.407 14:22:05 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:41.407 14:22:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:41.407 14:22:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.407 14:22:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.407 ************************************ 00:16:41.407 START TEST xnvme_fio_plugin 00:16:41.407 ************************************ 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:41.407 14:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:41.407 { 00:16:41.407 "subsystems": [ 00:16:41.407 { 00:16:41.407 "subsystem": "bdev", 00:16:41.407 "config": [ 00:16:41.407 { 00:16:41.407 "params": { 00:16:41.407 "io_mechanism": "io_uring", 00:16:41.407 "conserve_cpu": false, 00:16:41.407 "filename": "/dev/nvme0n1", 00:16:41.407 "name": "xnvme_bdev" 00:16:41.407 }, 00:16:41.407 "method": "bdev_xnvme_create" 00:16:41.407 }, 00:16:41.407 { 00:16:41.407 "method": "bdev_wait_for_examine" 00:16:41.407 } 00:16:41.407 ] 00:16:41.407 } 00:16:41.407 ] 00:16:41.407 } 00:16:41.407 14:22:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:41.407 14:22:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:41.407 14:22:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:41.407 14:22:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:41.407 14:22:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.407 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:41.407 fio-3.35 00:16:41.407 Starting 1 thread 00:16:47.981 00:16:47.981 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72864: Tue Dec 10 14:22:12 2024 00:16:47.981 read: IOPS=24.4k, BW=95.4MiB/s (100MB/s)(477MiB/5001msec) 00:16:47.981 slat (usec): min=2, max=1524, avg= 7.40, stdev= 5.69 00:16:47.981 clat (usec): min=1049, max=5636, avg=2324.58, stdev=367.76 00:16:47.981 lat (usec): min=1052, max=5664, avg=2331.98, stdev=369.56 00:16:47.981 clat percentiles (usec): 00:16:47.981 | 1.00th=[ 1352], 5.00th=[ 1647], 10.00th=[ 1795], 20.00th=[ 2008], 00:16:47.981 | 30.00th=[ 2180], 40.00th=[ 2311], 50.00th=[ 2376], 60.00th=[ 2474], 00:16:47.981 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 2737], 95.00th=[ 2835], 00:16:47.981 | 99.00th=[ 2933], 99.50th=[ 2966], 99.90th=[ 3228], 99.95th=[ 5080], 00:16:47.981 | 99.99th=[ 5538] 00:16:47.981 bw ( KiB/s): min=87888, 
max=117248, per=99.00%, avg=96748.44, stdev=10875.92, samples=9 00:16:47.981 iops : min=21972, max=29312, avg=24187.11, stdev=2718.98, samples=9 00:16:47.981 lat (msec) : 2=19.46%, 4=80.49%, 10=0.05% 00:16:47.981 cpu : usr=35.58%, sys=62.92%, ctx=7, majf=0, minf=762 00:16:47.981 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:47.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.981 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:47.981 issued rwts: total=122176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.981 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.981 00:16:47.981 Run status group 0 (all jobs): 00:16:47.981 READ: bw=95.4MiB/s (100MB/s), 95.4MiB/s-95.4MiB/s (100MB/s-100MB/s), io=477MiB (500MB), run=5001-5001msec 00:16:48.548 ----------------------------------------------------- 00:16:48.548 Suppressions used: 00:16:48.548 count bytes template 00:16:48.548 1 11 /usr/src/fio/parse.c 00:16:48.548 1 8 libtcmalloc_minimal.so 00:16:48.548 1 904 libcrypto.so 00:16:48.548 ----------------------------------------------------- 00:16:48.548 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:48.806 14:22:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:48.806 { 00:16:48.806 "subsystems": [ 00:16:48.806 { 00:16:48.806 "subsystem": "bdev", 00:16:48.806 "config": [ 00:16:48.806 { 00:16:48.806 "params": { 00:16:48.806 "io_mechanism": "io_uring", 00:16:48.806 "conserve_cpu": false, 00:16:48.806 "filename": "/dev/nvme0n1", 00:16:48.806 "name": "xnvme_bdev" 00:16:48.806 }, 00:16:48.806 "method": "bdev_xnvme_create" 00:16:48.806 }, 00:16:48.806 { 00:16:48.806 "method": "bdev_wait_for_examine" 00:16:48.806 } 00:16:48.806 ] 00:16:48.806 } 00:16:48.806 ] 00:16:48.806 } 00:16:49.065 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:49.065 fio-3.35 00:16:49.065 Starting 1 thread 00:16:55.636 00:16:55.636 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72961: Tue Dec 10 14:22:19 2024 00:16:55.636 write: IOPS=25.4k, BW=99.3MiB/s (104MB/s)(497MiB/5001msec); 0 zone resets 00:16:55.636 slat (nsec): min=2226, max=82851, avg=6881.68, stdev=3845.34 00:16:55.636 clat (usec): min=1040, max=8400, avg=2240.53, stdev=425.43 00:16:55.636 lat (usec): min=1043, max=8430, avg=2247.41, stdev=427.31 00:16:55.636 clat percentiles (usec): 00:16:55.636 | 1.00th=[ 1319], 5.00th=[ 1549], 10.00th=[ 1680], 20.00th=[ 1844], 00:16:55.636 | 30.00th=[ 2008], 40.00th=[ 2147], 50.00th=[ 2278], 60.00th=[ 2376], 00:16:55.636 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2737], 95.00th=[ 2835], 00:16:55.636 | 99.00th=[ 2933], 99.50th=[ 2999], 99.90th=[ 5145], 99.95th=[ 7832], 00:16:55.636 | 99.99th=[ 8291] 00:16:55.636 bw ( KiB/s): min=87040, max=123904, per=98.77%, avg=100408.89, stdev=13481.86, samples=9 00:16:55.636 iops : min=21760, max=30976, avg=25102.22, stdev=3370.46, samples=9 00:16:55.636 lat (msec) : 2=29.50%, 4=70.40%, 10=0.10% 00:16:55.636 cpu : usr=36.52%, sys=62.00%, ctx=14, majf=0, minf=763 00:16:55.636 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:55.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.636 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:55.636 issued rwts: total=0,127104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:55.636 00:16:55.636 Run status group 0 (all jobs): 00:16:55.636 WRITE: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=497MiB (521MB), run=5001-5001msec 00:16:56.205 ----------------------------------------------------- 00:16:56.205 Suppressions used: 00:16:56.205 count bytes template 00:16:56.205 1 11 /usr/src/fio/parse.c 00:16:56.205 1 8 libtcmalloc_minimal.so 00:16:56.205 1 904 libcrypto.so 00:16:56.205 ----------------------------------------------------- 00:16:56.205 00:16:56.205 00:16:56.205 real 0m14.987s 00:16:56.205 user 0m7.515s 00:16:56.205 sys 0m7.065s 00:16:56.205 ************************************ 00:16:56.205 END TEST xnvme_fio_plugin 00:16:56.205 ************************************ 
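[Each fio pass follows the same recipe: resolve the sanitizer runtime that build/fio/spdk_bdev links against, preload it ahead of the plugin, and point fio's spdk_bdev ioengine at the JSON bdev config. Spelled out with the auto-detection from the trace; library and fio paths are copied from this runner and will differ elsewhere:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# Mirror autotest_common.sh@1349: pick libasan out of the plugin's link map.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# -> /usr/lib64/libasan.so.8 on this machine

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev

The sanitizer runtime has to be loaded before the instrumented plugin, which is why it comes first in LD_PRELOAD.
]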
00:16:56.205 14:22:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.205 14:22:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:56.205 14:22:20 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:56.205 14:22:20 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:56.205 14:22:20 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:56.205 14:22:20 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:56.205 14:22:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:56.205 14:22:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.205 14:22:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:56.205 ************************************ 00:16:56.205 START TEST xnvme_rpc 00:16:56.205 ************************************ 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73055 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73055 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73055 ']' 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.205 14:22:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.465 [2024-12-10 14:22:21.118718] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
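[waitforlisten, run after every spdk_tgt launch in this log, polls the target's RPC socket instead of sleeping for a fixed interval; the trace shows its defaults (rpc_addr=/var/tmp/spdk.sock, max_retries=100). A simplified stand-in that conveys the idea; the real helper in autotest_common.sh does more bookkeeping, so treat this as an assumption-laden sketch:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        # Ready once a trivial RPC round-trips on the socket.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
            return 0
        fi
        kill -0 "$pid" 2> /dev/null || return 1    # target died while starting
        sleep 0.1
    done
    return 1
}
]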
00:16:56.465 [2024-12-10 14:22:21.118971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73055 ] 00:16:56.465 [2024-12-10 14:22:21.296688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.724 [2024-12-10 14:22:21.423899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.662 xnvme_bdev 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.662 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73055 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73055 ']' 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73055 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73055 00:16:57.922 killing process with pid 73055 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73055' 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73055 00:16:57.922 14:22:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73055 00:17:00.473 ************************************ 00:17:00.473 END TEST xnvme_rpc 00:17:00.473 ************************************ 00:17:00.473 00:17:00.473 real 0m4.200s 00:17:00.473 user 0m4.084s 00:17:00.473 sys 0m0.729s 00:17:00.473 14:22:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.473 14:22:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.473 14:22:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:00.473 14:22:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:00.473 14:22:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.473 14:22:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:00.473 ************************************ 00:17:00.473 START TEST xnvme_bdevperf 00:17:00.473 ************************************ 00:17:00.473 14:22:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:00.473 14:22:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:00.473 14:22:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:00.473 14:22:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:00.473 14:22:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:00.473 14:22:25 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:00.473 14:22:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:00.473 14:22:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:00.735 { 00:17:00.735 "subsystems": [ 00:17:00.735 { 00:17:00.735 "subsystem": "bdev", 00:17:00.735 "config": [ 00:17:00.735 { 00:17:00.735 "params": { 00:17:00.735 "io_mechanism": "io_uring", 00:17:00.735 "conserve_cpu": true, 00:17:00.735 "filename": "/dev/nvme0n1", 00:17:00.735 "name": "xnvme_bdev" 00:17:00.735 }, 00:17:00.735 "method": "bdev_xnvme_create" 00:17:00.735 }, 00:17:00.735 { 00:17:00.735 "method": "bdev_wait_for_examine" 00:17:00.735 } 00:17:00.735 ] 00:17:00.735 } 00:17:00.735 ] 00:17:00.735 } 00:17:00.735 [2024-12-10 14:22:25.388972] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:17:00.735 [2024-12-10 14:22:25.389086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73137 ] 00:17:00.994 [2024-12-10 14:22:25.570436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.994 [2024-12-10 14:22:25.699828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.253 Running I/O for 5 seconds... 00:17:03.569 36544.00 IOPS, 142.75 MiB/s [2024-12-10T14:22:29.384Z] 31648.00 IOPS, 123.62 MiB/s [2024-12-10T14:22:30.340Z] 29973.33 IOPS, 117.08 MiB/s [2024-12-10T14:22:31.278Z] 29264.00 IOPS, 114.31 MiB/s 00:17:06.444 Latency(us) 00:17:06.444 [2024-12-10T14:22:31.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.444 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:06.444 xnvme_bdev : 5.00 30168.04 117.84 0.00 0.00 2115.57 861.97 7843.26 00:17:06.444 [2024-12-10T14:22:31.278Z] =================================================================================================================== 00:17:06.444 [2024-12-10T14:22:31.278Z] Total : 30168.04 117.84 0.00 0.00 2115.57 861.97 7843.26 00:17:07.823 14:22:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:07.823 14:22:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:07.823 14:22:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:07.823 14:22:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:07.823 14:22:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:07.823 { 00:17:07.823 "subsystems": [ 00:17:07.823 { 00:17:07.823 "subsystem": "bdev", 00:17:07.823 "config": [ 00:17:07.823 { 00:17:07.823 "params": { 00:17:07.823 "io_mechanism": "io_uring", 00:17:07.823 "conserve_cpu": true, 00:17:07.823 "filename": "/dev/nvme0n1", 00:17:07.823 "name": "xnvme_bdev" 00:17:07.823 }, 00:17:07.823 "method": "bdev_xnvme_create" 00:17:07.823 }, 00:17:07.823 { 00:17:07.823 "method": "bdev_wait_for_examine" 00:17:07.823 } 00:17:07.823 ] 00:17:07.823 } 00:17:07.823 ] 00:17:07.823 } 00:17:07.823 [2024-12-10 14:22:32.345436] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
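Annotation: the JSON config printed above is handed to bdevperf over /dev/fd/62 via process substitution. A minimal standalone sketch of the same randread pass, with the config written to an ordinary file instead; the temp-file path is an assumption, everything else (JSON body, binary path, flags) is taken verbatim from the run above:

    # Sketch: replay the randread bdevperf pass by hand (/tmp path assumed, not from this log).
    SPDK_DIR=/home/vagrant/spdk_repo/spdk    # build tree used by this job

    cat > /tmp/xnvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "io_uring",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    # Same flags as above: queue depth 64, 4 KiB I/O, 5 s, against the single xnvme bdev.
    "$SPDK_DIR/build/examples/bdevperf" --json /tmp/xnvme_bdev.json \
      -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev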
00:17:07.823 [2024-12-10 14:22:32.345553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73219 ] 00:17:07.823 [2024-12-10 14:22:32.521398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.823 [2024-12-10 14:22:32.654264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.390 Running I/O for 5 seconds... 00:17:10.263 28544.00 IOPS, 111.50 MiB/s [2024-12-10T14:22:36.478Z] 31168.00 IOPS, 121.75 MiB/s [2024-12-10T14:22:37.416Z] 31168.00 IOPS, 121.75 MiB/s [2024-12-10T14:22:38.355Z] 30672.00 IOPS, 119.81 MiB/s [2024-12-10T14:22:38.355Z] 30592.00 IOPS, 119.50 MiB/s 00:17:13.521 Latency(us) 00:17:13.521 [2024-12-10T14:22:38.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.521 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:13.521 xnvme_bdev : 5.00 30585.24 119.47 0.00 0.00 2086.78 1098.85 4079.55 00:17:13.521 [2024-12-10T14:22:38.355Z] =================================================================================================================== 00:17:13.521 [2024-12-10T14:22:38.355Z] Total : 30585.24 119.47 0.00 0.00 2086.78 1098.85 4079.55 00:17:14.458 00:17:14.458 real 0m13.938s 00:17:14.458 user 0m8.054s 00:17:14.458 sys 0m5.376s 00:17:14.458 14:22:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.458 14:22:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:14.458 ************************************ 00:17:14.458 END TEST xnvme_bdevperf 00:17:14.458 ************************************ 00:17:14.717 14:22:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:14.717 14:22:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:14.717 14:22:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.717 14:22:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:14.717 ************************************ 00:17:14.717 START TEST xnvme_fio_plugin 00:17:14.717 ************************************ 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:14.717 14:22:39 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:14.717 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:14.718 { 00:17:14.718 "subsystems": [ 00:17:14.718 { 00:17:14.718 "subsystem": "bdev", 00:17:14.718 "config": [ 00:17:14.718 { 00:17:14.718 "params": { 00:17:14.718 "io_mechanism": "io_uring", 00:17:14.718 "conserve_cpu": true, 00:17:14.718 "filename": "/dev/nvme0n1", 00:17:14.718 "name": "xnvme_bdev" 00:17:14.718 }, 00:17:14.718 "method": "bdev_xnvme_create" 00:17:14.718 }, 00:17:14.718 { 00:17:14.718 "method": "bdev_wait_for_examine" 00:17:14.718 } 00:17:14.718 ] 00:17:14.718 } 00:17:14.718 ] 00:17:14.718 } 00:17:14.718 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:14.718 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:14.718 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:14.718 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:14.718 14:22:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:14.977 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:14.977 fio-3.35 00:17:14.977 Starting 1 thread 00:17:21.554 00:17:21.554 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73342: Tue Dec 10 14:22:45 2024 00:17:21.554 read: IOPS=23.7k, BW=92.4MiB/s (96.9MB/s)(462MiB/5001msec) 00:17:21.554 slat (usec): min=2, max=740, avg= 7.72, stdev= 5.07 00:17:21.554 clat (usec): min=1256, max=3707, avg=2393.32, stdev=367.72 00:17:21.554 lat (usec): min=1259, max=3717, avg=2401.03, stdev=369.36 00:17:21.554 clat percentiles (usec): 00:17:21.554 | 1.00th=[ 1434], 5.00th=[ 1614], 10.00th=[ 1795], 20.00th=[ 2147], 00:17:21.554 | 30.00th=[ 2278], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2540], 00:17:21.554 | 70.00th=[ 2606], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2868], 00:17:21.554 | 99.00th=[ 2999], 99.50th=[ 3097], 99.90th=[ 3425], 99.95th=[ 3523], 00:17:21.554 | 99.99th=[ 3654] 00:17:21.554 
bw ( KiB/s): min=88320, max=101888, per=99.44%, avg=94120.56, stdev=4854.85, samples=9 00:17:21.554 iops : min=22080, max=25472, avg=23530.11, stdev=1213.75, samples=9 00:17:21.554 lat (msec) : 2=14.93%, 4=85.07% 00:17:21.554 cpu : usr=43.30%, sys=51.26%, ctx=20, majf=0, minf=762 00:17:21.554 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:17:21.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.554 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:21.554 issued rwts: total=118336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:21.554 00:17:21.554 Run status group 0 (all jobs): 00:17:21.554 READ: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=462MiB (485MB), run=5001-5001msec 00:17:22.124 ----------------------------------------------------- 00:17:22.124 Suppressions used: 00:17:22.124 count bytes template 00:17:22.124 1 11 /usr/src/fio/parse.c 00:17:22.124 1 8 libtcmalloc_minimal.so 00:17:22.124 1 904 libcrypto.so 00:17:22.124 ----------------------------------------------------- 00:17:22.124 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:22.124 14:22:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:22.124 { 00:17:22.124 "subsystems": [ 00:17:22.124 { 00:17:22.124 "subsystem": "bdev", 00:17:22.124 "config": [ 00:17:22.124 { 00:17:22.124 "params": { 00:17:22.124 "io_mechanism": "io_uring", 00:17:22.124 "conserve_cpu": true, 00:17:22.124 "filename": "/dev/nvme0n1", 00:17:22.124 "name": "xnvme_bdev" 00:17:22.124 }, 00:17:22.124 "method": "bdev_xnvme_create" 00:17:22.124 }, 00:17:22.124 { 00:17:22.124 "method": "bdev_wait_for_examine" 00:17:22.124 } 00:17:22.124 ] 00:17:22.124 } 00:17:22.124 ] 00:17:22.124 } 00:17:22.384 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:22.384 fio-3.35 00:17:22.384 Starting 1 thread 00:17:28.961 00:17:28.961 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73441: Tue Dec 10 14:22:52 2024 00:17:28.961 write: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(451MiB/5001msec); 0 zone resets 00:17:28.961 slat (nsec): min=2388, max=79509, avg=8026.57, stdev=3973.45 00:17:28.961 clat (usec): min=1351, max=4745, avg=2451.45, stdev=315.28 00:17:28.961 lat (usec): min=1354, max=4774, avg=2459.48, stdev=316.86 00:17:28.961 clat percentiles (usec): 00:17:28.961 | 1.00th=[ 1647], 5.00th=[ 1860], 10.00th=[ 2008], 20.00th=[ 2212], 00:17:28.961 | 30.00th=[ 2311], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:17:28.961 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 2900], 00:17:28.961 | 99.00th=[ 2999], 99.50th=[ 3064], 99.90th=[ 4113], 99.95th=[ 4359], 00:17:28.961 | 99.99th=[ 4621] 00:17:28.961 bw ( KiB/s): min=86016, max=102195, per=100.00%, avg=92592.33, stdev=5876.10, samples=9 00:17:28.961 iops : min=21504, max=25548, avg=23148.00, stdev=1468.87, samples=9 00:17:28.961 lat (msec) : 2=9.57%, 4=90.32%, 10=0.11% 00:17:28.961 cpu : usr=49.00%, sys=46.24%, ctx=12, majf=0, minf=763 00:17:28.961 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:28.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.961 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:28.961 issued rwts: total=0,115456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.961 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:28.961 00:17:28.961 Run status group 0 (all jobs): 00:17:28.961 WRITE: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=451MiB (473MB), run=5001-5001msec 00:17:29.530 ----------------------------------------------------- 00:17:29.530 Suppressions used: 00:17:29.530 count bytes template 00:17:29.530 1 11 /usr/src/fio/parse.c 00:17:29.530 1 8 libtcmalloc_minimal.so 00:17:29.530 1 904 libcrypto.so 00:17:29.530 ----------------------------------------------------- 00:17:29.530 00:17:29.530 ************************************ 00:17:29.530 END TEST xnvme_fio_plugin 00:17:29.530 ************************************ 00:17:29.530 00:17:29.530 real 0m14.996s 00:17:29.530 user 
0m8.560s 00:17:29.530 sys 0m5.683s 00:17:29.530 14:22:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.530 14:22:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:29.790 14:22:54 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:29.790 14:22:54 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:17:29.790 14:22:54 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:17:29.790 14:22:54 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:17:29.790 14:22:54 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:29.790 14:22:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:29.790 14:22:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:29.790 14:22:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:29.790 14:22:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:29.790 14:22:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:29.790 14:22:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.790 14:22:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:29.790 ************************************ 00:17:29.790 START TEST xnvme_rpc 00:17:29.790 ************************************ 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73527 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73527 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73527 ']' 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.790 14:22:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.790 [2024-12-10 14:22:54.495046] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
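Annotation: the spdk_tgt coming up here is exercised with the same RPC sequence the earlier io_uring pass used, now with io_mechanism io_uring_cmd against the char device. A sketch of that sequence driven by hand, assuming the default RPC socket /var/tmp/spdk.sock (the positional argument order and jq filters are exactly those visible in the trace):

    # Sketch of the RPC flow this xnvme_rpc test exercises (default socket assumed).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Create the bdev: filename, name, io_mechanism; conserve_cpu left at false.
    "$RPC" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd

    # Read the parameters back out of the framework config, as rpc_xnvme does.
    "$RPC" framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
    "$RPC" framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'

    # Tear the bdev down again.
    "$RPC" bdev_xnvme_delete xnvme_bdev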
00:17:29.790 [2024-12-10 14:22:54.495160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73527 ] 00:17:30.050 [2024-12-10 14:22:54.673757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.050 [2024-12-10 14:22:54.813477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.430 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.430 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:31.430 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:17:31.430 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.431 xnvme_bdev 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:31.431 
14:22:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.431 14:22:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73527 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73527 ']' 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73527 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73527 00:17:31.431 killing process with pid 73527 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73527' 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73527 00:17:31.431 14:22:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73527 00:17:33.970 ************************************ 00:17:33.970 END TEST xnvme_rpc 00:17:33.970 ************************************ 00:17:33.970 00:17:33.970 real 0m4.201s 00:17:33.970 user 0m4.090s 00:17:33.970 sys 0m0.725s 00:17:33.970 14:22:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.970 14:22:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.970 14:22:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:33.970 14:22:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:33.970 14:22:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.970 14:22:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:33.970 ************************************ 00:17:33.970 START TEST xnvme_bdevperf 00:17:33.970 ************************************ 00:17:33.970 14:22:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:33.970 14:22:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:33.970 14:22:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:33.970 14:22:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:33.970 14:22:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:33.970 14:22:58 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:33.970 14:22:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:33.970 14:22:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:33.970 { 00:17:33.970 "subsystems": [ 00:17:33.970 { 00:17:33.970 "subsystem": "bdev", 00:17:33.970 "config": [ 00:17:33.970 { 00:17:33.970 "params": { 00:17:33.970 "io_mechanism": "io_uring_cmd", 00:17:33.970 "conserve_cpu": false, 00:17:33.970 "filename": "/dev/ng0n1", 00:17:33.970 "name": "xnvme_bdev" 00:17:33.970 }, 00:17:33.970 "method": "bdev_xnvme_create" 00:17:33.970 }, 00:17:33.970 { 00:17:33.970 "method": "bdev_wait_for_examine" 00:17:33.970 } 00:17:33.970 ] 00:17:33.970 } 00:17:33.970 ] 00:17:33.970 } 00:17:33.970 [2024-12-10 14:22:58.768107] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:17:33.970 [2024-12-10 14:22:58.768233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73612 ] 00:17:34.239 [2024-12-10 14:22:58.953621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.544 [2024-12-10 14:22:59.092603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.822 Running I/O for 5 seconds... 00:17:36.700 32576.00 IOPS, 127.25 MiB/s [2024-12-10T14:23:02.911Z] 32512.00 IOPS, 127.00 MiB/s [2024-12-10T14:23:03.849Z] 32618.67 IOPS, 127.42 MiB/s [2024-12-10T14:23:04.785Z] 33376.00 IOPS, 130.38 MiB/s 00:17:39.951 Latency(us) 00:17:39.951 [2024-12-10T14:23:04.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.951 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:39.951 xnvme_bdev : 5.00 33327.35 130.18 0.00 0.00 1915.32 921.19 6158.80 00:17:39.951 [2024-12-10T14:23:04.785Z] =================================================================================================================== 00:17:39.951 [2024-12-10T14:23:04.785Z] Total : 33327.35 130.18 0.00 0.00 1915.32 921.19 6158.80 00:17:40.887 14:23:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:40.887 14:23:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:40.887 14:23:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:40.887 14:23:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:40.887 14:23:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:41.146 { 00:17:41.146 "subsystems": [ 00:17:41.146 { 00:17:41.146 "subsystem": "bdev", 00:17:41.146 "config": [ 00:17:41.146 { 00:17:41.146 "params": { 00:17:41.146 "io_mechanism": "io_uring_cmd", 00:17:41.146 "conserve_cpu": false, 00:17:41.146 "filename": "/dev/ng0n1", 00:17:41.146 "name": "xnvme_bdev" 00:17:41.146 }, 00:17:41.146 "method": "bdev_xnvme_create" 00:17:41.146 }, 00:17:41.146 { 00:17:41.146 "method": "bdev_wait_for_examine" 00:17:41.146 } 00:17:41.146 ] 00:17:41.146 } 00:17:41.146 ] 00:17:41.146 } 00:17:41.146 [2024-12-10 14:23:05.758563] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
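Annotation: unlike the io_uring passes, which opened the block device /dev/nvme0n1, the io_uring_cmd runs above and below drive NVMe passthrough through the generic character node /dev/ng0n1. A quick check of the two node types, using the device names from this run (GNU stat assumed):

    # Sketch: confirm the two device node types this job alternates between.
    stat -c '%n: %F' /dev/nvme0n1   # block special file
    stat -c '%n: %F' /dev/ng0n1     # character special file (nvme generic node)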
00:17:41.146 [2024-12-10 14:23:05.759343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73693 ] 00:17:41.146 [2024-12-10 14:23:05.965696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.405 [2024-12-10 14:23:06.098100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.663 Running I/O for 5 seconds... 00:17:43.982 24256.00 IOPS, 94.75 MiB/s [2024-12-10T14:23:09.755Z] 23360.00 IOPS, 91.25 MiB/s [2024-12-10T14:23:10.693Z] 23338.67 IOPS, 91.17 MiB/s [2024-12-10T14:23:11.629Z] 23792.00 IOPS, 92.94 MiB/s 00:17:46.795 Latency(us) 00:17:46.795 [2024-12-10T14:23:11.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.795 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:46.795 xnvme_bdev : 5.00 24195.35 94.51 0.00 0.00 2636.42 1158.07 7948.54 00:17:46.795 [2024-12-10T14:23:11.629Z] =================================================================================================================== 00:17:46.795 [2024-12-10T14:23:11.629Z] Total : 24195.35 94.51 0.00 0.00 2636.42 1158.07 7948.54 00:17:48.174 14:23:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:48.174 14:23:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:48.174 14:23:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:48.174 14:23:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:48.174 14:23:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:48.174 { 00:17:48.174 "subsystems": [ 00:17:48.174 { 00:17:48.174 "subsystem": "bdev", 00:17:48.174 "config": [ 00:17:48.174 { 00:17:48.174 "params": { 00:17:48.174 "io_mechanism": "io_uring_cmd", 00:17:48.174 "conserve_cpu": false, 00:17:48.174 "filename": "/dev/ng0n1", 00:17:48.174 "name": "xnvme_bdev" 00:17:48.174 }, 00:17:48.174 "method": "bdev_xnvme_create" 00:17:48.174 }, 00:17:48.174 { 00:17:48.174 "method": "bdev_wait_for_examine" 00:17:48.174 } 00:17:48.174 ] 00:17:48.174 } 00:17:48.174 ] 00:17:48.174 } 00:17:48.174 [2024-12-10 14:23:12.772210] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:17:48.174 [2024-12-10 14:23:12.772338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73776 ] 00:17:48.174 [2024-12-10 14:23:12.952863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.433 [2024-12-10 14:23:13.080614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.692 Running I/O for 5 seconds... 
00:17:51.005 72192.00 IOPS, 282.00 MiB/s [2024-12-10T14:23:16.775Z] 72192.00 IOPS, 282.00 MiB/s [2024-12-10T14:23:17.712Z] 72426.67 IOPS, 282.92 MiB/s [2024-12-10T14:23:18.650Z] 72224.00 IOPS, 282.12 MiB/s [2024-12-10T14:23:18.650Z] 72256.00 IOPS, 282.25 MiB/s 00:17:53.816 Latency(us) 00:17:53.816 [2024-12-10T14:23:18.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.816 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:53.816 xnvme_bdev : 5.00 72239.62 282.19 0.00 0.00 883.29 654.70 2671.45 00:17:53.816 [2024-12-10T14:23:18.650Z] =================================================================================================================== 00:17:53.816 [2024-12-10T14:23:18.650Z] Total : 72239.62 282.19 0.00 0.00 883.29 654.70 2671.45 00:17:55.195 14:23:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:55.195 14:23:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:55.195 14:23:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:55.195 14:23:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:55.195 14:23:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:55.195 { 00:17:55.195 "subsystems": [ 00:17:55.195 { 00:17:55.195 "subsystem": "bdev", 00:17:55.195 "config": [ 00:17:55.195 { 00:17:55.195 "params": { 00:17:55.195 "io_mechanism": "io_uring_cmd", 00:17:55.195 "conserve_cpu": false, 00:17:55.195 "filename": "/dev/ng0n1", 00:17:55.195 "name": "xnvme_bdev" 00:17:55.195 }, 00:17:55.195 "method": "bdev_xnvme_create" 00:17:55.195 }, 00:17:55.195 { 00:17:55.195 "method": "bdev_wait_for_examine" 00:17:55.195 } 00:17:55.195 ] 00:17:55.195 } 00:17:55.195 ] 00:17:55.195 } 00:17:55.195 [2024-12-10 14:23:19.759274] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:17:55.195 [2024-12-10 14:23:19.759378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73853 ] 00:17:55.195 [2024-12-10 14:23:19.937231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.454 [2024-12-10 14:23:20.079695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.713 Running I/O for 5 seconds... 
00:17:58.058 34004.00 IOPS, 132.83 MiB/s [2024-12-10T14:23:23.831Z] 40125.00 IOPS, 156.74 MiB/s [2024-12-10T14:23:24.763Z] 45206.33 IOPS, 176.59 MiB/s [2024-12-10T14:23:25.698Z] 46926.25 IOPS, 183.31 MiB/s [2024-12-10T14:23:25.698Z] 49241.20 IOPS, 192.35 MiB/s 00:18:00.864 Latency(us) 00:18:00.864 [2024-12-10T14:23:25.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.864 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:00.864 xnvme_bdev : 5.00 49231.02 192.31 0.00 0.00 1296.12 120.91 10475.23 00:18:00.864 [2024-12-10T14:23:25.698Z] =================================================================================================================== 00:18:00.864 [2024-12-10T14:23:25.698Z] Total : 49231.02 192.31 0.00 0.00 1296.12 120.91 10475.23 00:18:02.240 00:18:02.240 real 0m27.993s 00:18:02.240 user 0m14.478s 00:18:02.240 sys 0m13.090s 00:18:02.240 14:23:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.240 14:23:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:02.240 ************************************ 00:18:02.240 END TEST xnvme_bdevperf 00:18:02.240 ************************************ 00:18:02.240 14:23:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:02.240 14:23:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.240 14:23:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.240 14:23:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:02.240 ************************************ 00:18:02.240 START TEST xnvme_fio_plugin 00:18:02.240 ************************************ 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
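Annotation: the trace here is resolving the fio plugin path and the sanitizer library; the command it ultimately assembles (visible in full in the LD_PRELOAD line below) boils down to the sketch that follows. Flags, plugin path, and the libasan path are taken verbatim from this run; only the JSON config file stands in for the /dev/fd/62 process substitution:

    # Sketch: the fio invocation the wrapper below assembles (config file path assumed).
    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json --filename=xnvme_bdev \
      --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
      --time_based --runtime=5 --thread=1 --name xnvme_bdev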
00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:02.240 14:23:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:02.240 { 00:18:02.240 "subsystems": [ 00:18:02.240 { 00:18:02.240 "subsystem": "bdev", 00:18:02.240 "config": [ 00:18:02.240 { 00:18:02.240 "params": { 00:18:02.240 "io_mechanism": "io_uring_cmd", 00:18:02.240 "conserve_cpu": false, 00:18:02.240 "filename": "/dev/ng0n1", 00:18:02.240 "name": "xnvme_bdev" 00:18:02.240 }, 00:18:02.240 "method": "bdev_xnvme_create" 00:18:02.240 }, 00:18:02.240 { 00:18:02.240 "method": "bdev_wait_for_examine" 00:18:02.240 } 00:18:02.240 ] 00:18:02.240 } 00:18:02.240 ] 00:18:02.240 } 00:18:02.240 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:02.240 fio-3.35 00:18:02.240 Starting 1 thread 00:18:08.813 00:18:08.813 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73977: Tue Dec 10 14:23:32 2024 00:18:08.813 read: IOPS=25.4k, BW=99.2MiB/s (104MB/s)(496MiB/5001msec) 00:18:08.813 slat (usec): min=2, max=140, avg= 7.34, stdev= 4.17 00:18:08.813 clat (usec): min=951, max=8802, avg=2226.94, stdev=476.10 00:18:08.813 lat (usec): min=954, max=8831, avg=2234.29, stdev=478.20 00:18:08.813 clat percentiles (usec): 00:18:08.813 | 1.00th=[ 1172], 5.00th=[ 1319], 10.00th=[ 1450], 20.00th=[ 1795], 00:18:08.813 | 30.00th=[ 2114], 40.00th=[ 2245], 50.00th=[ 2343], 60.00th=[ 2442], 00:18:08.813 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2704], 95.00th=[ 2769], 00:18:08.813 | 99.00th=[ 2868], 99.50th=[ 2933], 99.90th=[ 3130], 99.95th=[ 8225], 00:18:08.813 | 99.99th=[ 8717] 00:18:08.813 bw ( KiB/s): min=88576, max=116736, per=99.76%, avg=101319.11, stdev=9753.04, samples=9 00:18:08.813 iops : min=22144, max=29184, avg=25329.78, stdev=2438.26, samples=9 00:18:08.813 lat (usec) : 1000=0.03% 00:18:08.813 lat (msec) : 2=25.99%, 4=73.94%, 10=0.05% 00:18:08.813 cpu : usr=39.20%, sys=59.08%, ctx=11, majf=0, minf=762 00:18:08.813 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:08.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.814 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:18:08.814 issued rwts: total=126976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.814 00:18:08.814 Run status group 0 (all jobs): 00:18:08.814 READ: bw=99.2MiB/s (104MB/s), 99.2MiB/s-99.2MiB/s (104MB/s-104MB/s), io=496MiB (520MB), run=5001-5001msec 00:18:09.383 ----------------------------------------------------- 00:18:09.383 Suppressions used: 00:18:09.383 count bytes template 00:18:09.383 1 11 /usr/src/fio/parse.c 00:18:09.383 1 8 libtcmalloc_minimal.so 00:18:09.383 1 904 libcrypto.so 00:18:09.383 ----------------------------------------------------- 00:18:09.383 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.383 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:09.643 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:09.643 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:09.643 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:09.643 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:09.643 14:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:09.643 { 00:18:09.643 "subsystems": [ 00:18:09.643 { 00:18:09.643 "subsystem": "bdev", 00:18:09.643 "config": [ 00:18:09.643 { 00:18:09.643 "params": { 00:18:09.643 "io_mechanism": "io_uring_cmd", 00:18:09.643 "conserve_cpu": false, 00:18:09.643 "filename": "/dev/ng0n1", 00:18:09.643 "name": "xnvme_bdev" 00:18:09.643 }, 00:18:09.643 "method": "bdev_xnvme_create" 00:18:09.643 }, 00:18:09.643 { 00:18:09.643 "method": "bdev_wait_for_examine" 00:18:09.643 } 00:18:09.643 ] 00:18:09.643 } 00:18:09.643 ] 00:18:09.643 } 00:18:09.643 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:09.643 fio-3.35 00:18:09.643 Starting 1 thread 00:18:16.223 00:18:16.223 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74074: Tue Dec 10 14:23:40 2024 00:18:16.223 write: IOPS=23.1k, BW=90.3MiB/s (94.7MB/s)(452MiB/5001msec); 0 zone resets 00:18:16.223 slat (usec): min=5, max=1910, avg= 9.01, stdev= 7.72 00:18:16.223 clat (usec): min=886, max=5170, avg=2408.66, stdev=297.93 00:18:16.223 lat (usec): min=895, max=5374, avg=2417.67, stdev=298.61 00:18:16.223 clat percentiles (usec): 00:18:16.223 | 1.00th=[ 1516], 5.00th=[ 1844], 10.00th=[ 2057], 20.00th=[ 2212], 00:18:16.223 | 30.00th=[ 2278], 40.00th=[ 2343], 50.00th=[ 2442], 60.00th=[ 2507], 00:18:16.223 | 70.00th=[ 2573], 80.00th=[ 2638], 90.00th=[ 2737], 95.00th=[ 2802], 00:18:16.223 | 99.00th=[ 2999], 99.50th=[ 3261], 99.90th=[ 3621], 99.95th=[ 3752], 00:18:16.223 | 99.99th=[ 4293] 00:18:16.223 bw ( KiB/s): min=86016, max=103168, per=99.91%, avg=92392.00, stdev=5020.10, samples=9 00:18:16.223 iops : min=21504, max=25792, avg=23097.89, stdev=1255.06, samples=9 00:18:16.223 lat (usec) : 1000=0.01% 00:18:16.223 lat (msec) : 2=7.94%, 4=92.03%, 10=0.01% 00:18:16.223 cpu : usr=42.08%, sys=56.24%, ctx=10, majf=0, minf=763 00:18:16.223 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:16.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.223 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:16.223 issued rwts: total=0,115616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.223 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:16.223 00:18:16.223 Run status group 0 (all jobs): 00:18:16.223 WRITE: bw=90.3MiB/s (94.7MB/s), 90.3MiB/s-90.3MiB/s (94.7MB/s-94.7MB/s), io=452MiB (474MB), run=5001-5001msec 00:18:17.163 ----------------------------------------------------- 00:18:17.163 Suppressions used: 00:18:17.163 count bytes template 00:18:17.163 1 11 /usr/src/fio/parse.c 00:18:17.163 1 8 libtcmalloc_minimal.so 00:18:17.163 1 904 libcrypto.so 00:18:17.163 ----------------------------------------------------- 00:18:17.163 00:18:17.163 00:18:17.163 real 0m14.938s 00:18:17.163 user 0m8.051s 00:18:17.163 sys 0m6.483s 00:18:17.163 14:23:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.163 14:23:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:17.163 ************************************ 00:18:17.163 END TEST xnvme_fio_plugin 00:18:17.163 ************************************ 00:18:17.163 14:23:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:17.163 14:23:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:17.163 14:23:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 
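Annotation: the conserve_cpu=true assignment above moves the job into the last cell of its matrix: {io_uring, io_uring_cmd} x {conserve_cpu false, true}, each cell running xnvme_rpc, xnvme_bdevperf, and xnvme_fio_plugin. A condensed outline of the driver loop the xnvme.sh trace lines correspond to; structure is inferred from the trace, and run_test and the xnvme_* helpers come from the harness, so this is shown for shape only:

    # Outline of the xnvme.sh driver loop (inferred from the trace; helpers not defined here).
    declare -A method_bdev_xnvme_create_0
    for io in "${xnvme_io[@]}"; do                 # io_uring, io_uring_cmd, ...
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      for cc in "${xnvme_conserve_cpu[@]}"; do     # false, then true
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        run_test xnvme_rpc xnvme_rpc
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
      done
    done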
00:18:17.163 14:23:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:17.163 14:23:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:17.163 14:23:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.163 14:23:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:17.163 ************************************ 00:18:17.163 START TEST xnvme_rpc 00:18:17.163 ************************************ 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=74165 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 74165 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 74165 ']' 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.163 14:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.163 [2024-12-10 14:23:41.862776] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
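Annotation: every xnvme_rpc pass brackets its RPCs with the same lifecycle seen here: launch spdk_tgt, block in waitforlisten until the UNIX-domain RPC socket answers, then killprocess at the end (the kill -0 and ps probes in the trace are its liveness checks). A simplified sketch of that pattern; the real helpers live in autotest_common.sh and this polling loop only illustrates them:

    # Sketch: start spdk_tgt, wait for its RPC socket, clean up (simplified vs. the harness).
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$SPDK_BIN" & pid=$!

    # waitforlisten: poll until /var/tmp/spdk.sock accepts an RPC.
    until "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt died" >&2; exit 1; }
      sleep 0.1
    done

    # ... RPCs against the target go here ...

    kill "$pid" && wait "$pid"   # killprocess: terminate and reap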
00:18:17.163 [2024-12-10 14:23:41.862884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74165 ] 00:18:17.423 [2024-12-10 14:23:42.043702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.423 [2024-12-10 14:23:42.176820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.362 xnvme_bdev 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.362 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 74165 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 74165 ']' 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 74165 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74165 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.623 killing process with pid 74165 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74165' 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 74165 00:18:18.623 14:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 74165 00:18:21.161 00:18:21.161 real 0m4.168s 00:18:21.161 user 0m4.035s 00:18:21.161 sys 0m0.726s 00:18:21.161 14:23:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.161 14:23:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.161 ************************************ 00:18:21.161 END TEST xnvme_rpc 00:18:21.161 ************************************ 00:18:21.161 14:23:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:21.161 14:23:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:21.161 14:23:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.161 14:23:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:21.421 ************************************ 00:18:21.421 START TEST xnvme_bdevperf 00:18:21.421 ************************************ 00:18:21.421 14:23:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:21.421 14:23:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:21.421 14:23:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:21.421 14:23:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:21.421 14:23:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:21.421 14:23:46 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:21.421 14:23:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:21.421 14:23:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:21.421 { 00:18:21.421 "subsystems": [ 00:18:21.421 { 00:18:21.421 "subsystem": "bdev", 00:18:21.421 "config": [ 00:18:21.421 { 00:18:21.421 "params": { 00:18:21.421 "io_mechanism": "io_uring_cmd", 00:18:21.421 "conserve_cpu": true, 00:18:21.421 "filename": "/dev/ng0n1", 00:18:21.421 "name": "xnvme_bdev" 00:18:21.421 }, 00:18:21.421 "method": "bdev_xnvme_create" 00:18:21.421 }, 00:18:21.421 { 00:18:21.421 "method": "bdev_wait_for_examine" 00:18:21.421 } 00:18:21.421 ] 00:18:21.421 } 00:18:21.421 ] 00:18:21.421 } 00:18:21.421 [2024-12-10 14:23:46.106292] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:18:21.421 [2024-12-10 14:23:46.106416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74249 ] 00:18:21.681 [2024-12-10 14:23:46.289177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.681 [2024-12-10 14:23:46.415065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.251 Running I/O for 5 seconds... 00:18:24.129 26048.00 IOPS, 101.75 MiB/s [2024-12-10T14:23:49.902Z] 25792.00 IOPS, 100.75 MiB/s [2024-12-10T14:23:50.841Z] 24789.33 IOPS, 96.83 MiB/s [2024-12-10T14:23:52.250Z] 24399.50 IOPS, 95.31 MiB/s [2024-12-10T14:23:52.250Z] 26021.60 IOPS, 101.65 MiB/s 00:18:27.416 Latency(us) 00:18:27.416 [2024-12-10T14:23:52.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.416 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:27.416 xnvme_bdev : 5.01 25999.71 101.56 0.00 0.00 2453.83 914.61 10054.12 00:18:27.416 [2024-12-10T14:23:52.250Z] =================================================================================================================== 00:18:27.416 [2024-12-10T14:23:52.250Z] Total : 25999.71 101.56 0.00 0.00 2453.83 914.61 10054.12 00:18:28.354 14:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:28.354 14:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:28.354 14:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:28.354 14:23:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:28.354 14:23:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:28.354 { 00:18:28.354 "subsystems": [ 00:18:28.354 { 00:18:28.354 "subsystem": "bdev", 00:18:28.354 "config": [ 00:18:28.354 { 00:18:28.354 "params": { 00:18:28.354 "io_mechanism": "io_uring_cmd", 00:18:28.354 "conserve_cpu": true, 00:18:28.354 "filename": "/dev/ng0n1", 00:18:28.354 "name": "xnvme_bdev" 00:18:28.354 }, 00:18:28.354 "method": "bdev_xnvme_create" 00:18:28.354 }, 00:18:28.354 { 00:18:28.354 "method": "bdev_wait_for_examine" 00:18:28.354 } 00:18:28.354 ] 00:18:28.354 } 00:18:28.354 ] 00:18:28.354 } 00:18:28.354 [2024-12-10 14:23:53.109549] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
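The JSON dumped above is the exact bdev config that bdevperf consumed through /dev/fd/62. As a rough standalone equivalent (a sketch only: it writes the same config to a temp file instead of the generated file descriptor; every path, flag, and parameter value is taken from this log):

    cat > /tmp/xnvme_bdev.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "io_uring_cmd",
                "conserve_cpu": true,
                "filename": "/dev/ng0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON
    # -q queue depth, -w workload, -t runtime in seconds, -T target bdev
    # name, -o IO size in bytes -- matching the run reported above
    # (workload: randread, depth: 64, IO size: 4096, 5 seconds).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096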
00:18:28.354 [2024-12-10 14:23:53.109775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74330 ] 00:18:28.613 [2024-12-10 14:23:53.297510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.613 [2024-12-10 14:23:53.428139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.182 Running I/O for 5 seconds... 00:18:31.060 30848.00 IOPS, 120.50 MiB/s [2024-12-10T14:23:56.835Z] 26624.00 IOPS, 104.00 MiB/s [2024-12-10T14:23:58.216Z] 25408.00 IOPS, 99.25 MiB/s [2024-12-10T14:23:59.155Z] 24752.00 IOPS, 96.69 MiB/s [2024-12-10T14:23:59.155Z] 26816.00 IOPS, 104.75 MiB/s 00:18:34.321 Latency(us) 00:18:34.321 [2024-12-10T14:23:59.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.321 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:34.321 xnvme_bdev : 5.01 26781.60 104.62 0.00 0.00 2382.44 523.10 8474.94 00:18:34.321 [2024-12-10T14:23:59.155Z] =================================================================================================================== 00:18:34.321 [2024-12-10T14:23:59.155Z] Total : 26781.60 104.62 0.00 0.00 2382.44 523.10 8474.94 00:18:35.261 14:24:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:35.261 14:24:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:35.261 14:24:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:35.261 14:24:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:35.261 14:24:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:35.261 { 00:18:35.261 "subsystems": [ 00:18:35.261 { 00:18:35.261 "subsystem": "bdev", 00:18:35.261 "config": [ 00:18:35.261 { 00:18:35.261 "params": { 00:18:35.261 "io_mechanism": "io_uring_cmd", 00:18:35.261 "conserve_cpu": true, 00:18:35.261 "filename": "/dev/ng0n1", 00:18:35.261 "name": "xnvme_bdev" 00:18:35.261 }, 00:18:35.261 "method": "bdev_xnvme_create" 00:18:35.261 }, 00:18:35.261 { 00:18:35.261 "method": "bdev_wait_for_examine" 00:18:35.261 } 00:18:35.261 ] 00:18:35.261 } 00:18:35.261 ] 00:18:35.261 } 00:18:35.520 [2024-12-10 14:24:00.109321] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:18:35.521 [2024-12-10 14:24:00.109434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74408 ] 00:18:35.521 [2024-12-10 14:24:00.286540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.780 [2024-12-10 14:24:00.432236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.040 Running I/O for 5 seconds... 
00:18:38.358 71936.00 IOPS, 281.00 MiB/s [2024-12-10T14:24:04.130Z] 71776.00 IOPS, 280.38 MiB/s [2024-12-10T14:24:05.068Z] 71125.33 IOPS, 277.83 MiB/s [2024-12-10T14:24:06.005Z] 71136.00 IOPS, 277.88 MiB/s [2024-12-10T14:24:06.005Z] 71283.20 IOPS, 278.45 MiB/s 00:18:41.171 Latency(us) 00:18:41.171 [2024-12-10T14:24:06.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.171 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:41.171 xnvme_bdev : 5.00 71269.70 278.40 0.00 0.00 895.32 631.67 6106.17 00:18:41.171 [2024-12-10T14:24:06.005Z] =================================================================================================================== 00:18:41.171 [2024-12-10T14:24:06.005Z] Total : 71269.70 278.40 0.00 0.00 895.32 631.67 6106.17 00:18:42.552 14:24:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:42.552 14:24:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:42.552 14:24:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:42.552 14:24:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:42.552 14:24:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:42.552 { 00:18:42.552 "subsystems": [ 00:18:42.552 { 00:18:42.552 "subsystem": "bdev", 00:18:42.552 "config": [ 00:18:42.552 { 00:18:42.552 "params": { 00:18:42.552 "io_mechanism": "io_uring_cmd", 00:18:42.552 "conserve_cpu": true, 00:18:42.552 "filename": "/dev/ng0n1", 00:18:42.552 "name": "xnvme_bdev" 00:18:42.552 }, 00:18:42.552 "method": "bdev_xnvme_create" 00:18:42.552 }, 00:18:42.552 { 00:18:42.552 "method": "bdev_wait_for_examine" 00:18:42.552 } 00:18:42.552 ] 00:18:42.552 } 00:18:42.552 ] 00:18:42.552 } 00:18:42.552 [2024-12-10 14:24:07.093685] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:18:42.552 [2024-12-10 14:24:07.093796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74489 ] 00:18:42.552 [2024-12-10 14:24:07.274819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.812 [2024-12-10 14:24:07.406433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.072 Running I/O for 5 seconds... 
00:18:45.378 41369.00 IOPS, 161.60 MiB/s [2024-12-10T14:24:11.142Z] 46719.00 IOPS, 182.50 MiB/s [2024-12-10T14:24:12.073Z] 49136.00 IOPS, 191.94 MiB/s [2024-12-10T14:24:13.006Z] 50524.25 IOPS, 197.36 MiB/s [2024-12-10T14:24:13.006Z] 51610.20 IOPS, 201.60 MiB/s 00:18:48.172 Latency(us) 00:18:48.172 [2024-12-10T14:24:13.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.172 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:48.172 xnvme_bdev : 5.00 51588.39 201.52 0.00 0.00 1236.84 81.02 12844.00 00:18:48.172 [2024-12-10T14:24:13.006Z] =================================================================================================================== 00:18:48.172 [2024-12-10T14:24:13.006Z] Total : 51588.39 201.52 0.00 0.00 1236.84 81.02 12844.00 00:18:49.551 00:18:49.551 real 0m27.974s 00:18:49.551 user 0m18.878s 00:18:49.551 sys 0m7.828s 00:18:49.551 14:24:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.551 14:24:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:49.551 ************************************ 00:18:49.551 END TEST xnvme_bdevperf 00:18:49.551 ************************************ 00:18:49.551 14:24:14 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:49.551 14:24:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:49.551 14:24:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.551 14:24:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.551 ************************************ 00:18:49.551 START TEST xnvme_fio_plugin 00:18:49.551 ************************************ 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
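What the fio_bdev/fio_plugin helper traced here boils down to, as a minimal sketch (the plugin path, the ldd/grep/awk pipeline, the LD_PRELOAD value, and the fio flags all appear verbatim in this trace; the /tmp config file name carries over from the earlier sketch):

    # Find the ASAN runtime the fio plugin was linked against, then preload
    # it together with the plugin so fio can resolve the spdk_bdev ioengine.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev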
00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:49.551 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:49.551 { 00:18:49.551 "subsystems": [ 00:18:49.551 { 00:18:49.551 "subsystem": "bdev", 00:18:49.551 "config": [ 00:18:49.552 { 00:18:49.552 "params": { 00:18:49.552 "io_mechanism": "io_uring_cmd", 00:18:49.552 "conserve_cpu": true, 00:18:49.552 "filename": "/dev/ng0n1", 00:18:49.552 "name": "xnvme_bdev" 00:18:49.552 }, 00:18:49.552 "method": "bdev_xnvme_create" 00:18:49.552 }, 00:18:49.552 { 00:18:49.552 "method": "bdev_wait_for_examine" 00:18:49.552 } 00:18:49.552 ] 00:18:49.552 } 00:18:49.552 ] 00:18:49.552 } 00:18:49.552 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:49.552 14:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:49.552 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:49.552 fio-3.35 00:18:49.552 Starting 1 thread 00:18:56.174 00:18:56.174 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74613: Tue Dec 10 14:24:20 2024 00:18:56.174 read: IOPS=24.2k, BW=94.5MiB/s (99.1MB/s)(473MiB/5001msec) 00:18:56.174 slat (nsec): min=2185, max=70011, avg=7853.54, stdev=4007.50 00:18:56.174 clat (usec): min=1077, max=7914, avg=2329.23, stdev=401.42 00:18:56.174 lat (usec): min=1079, max=7942, avg=2337.08, stdev=403.09 00:18:56.174 clat percentiles (usec): 00:18:56.174 | 1.00th=[ 1270], 5.00th=[ 1500], 10.00th=[ 1729], 20.00th=[ 2089], 00:18:56.174 | 30.00th=[ 2245], 40.00th=[ 2311], 50.00th=[ 2409], 60.00th=[ 2474], 00:18:56.174 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 2737], 95.00th=[ 2802], 00:18:56.174 | 99.00th=[ 2900], 99.50th=[ 2933], 99.90th=[ 4752], 99.95th=[ 7308], 00:18:56.174 | 99.99th=[ 7832] 00:18:56.174 bw ( KiB/s): min=88910, max=111104, per=100.00%, avg=97203.33, stdev=9022.04, samples=9 00:18:56.174 iops : min=22227, max=27776, avg=24300.78, stdev=2255.57, samples=9 00:18:56.174 lat (msec) : 2=16.76%, 4=83.11%, 10=0.13% 00:18:56.174 cpu : usr=47.82%, sys=48.32%, ctx=8, majf=0, minf=762 00:18:56.174 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:56.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.174 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:56.174 
issued rwts: total=121024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.174 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:56.174 00:18:56.174 Run status group 0 (all jobs): 00:18:56.174 READ: bw=94.5MiB/s (99.1MB/s), 94.5MiB/s-94.5MiB/s (99.1MB/s-99.1MB/s), io=473MiB (496MB), run=5001-5001msec 00:18:56.744 ----------------------------------------------------- 00:18:56.744 Suppressions used: 00:18:56.744 count bytes template 00:18:56.744 1 11 /usr/src/fio/parse.c 00:18:56.744 1 8 libtcmalloc_minimal.so 00:18:56.744 1 904 libcrypto.so 00:18:56.744 ----------------------------------------------------- 00:18:56.744 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.744 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:57.004 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:57.004 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:57.004 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:57.004 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:57.004 14:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:57.004 { 00:18:57.004 "subsystems": [ 00:18:57.004 { 00:18:57.004 "subsystem": "bdev", 00:18:57.004 "config": [ 00:18:57.004 { 00:18:57.004 "params": { 00:18:57.004 "io_mechanism": "io_uring_cmd", 00:18:57.004 "conserve_cpu": true, 00:18:57.004 "filename": "/dev/ng0n1", 00:18:57.004 "name": "xnvme_bdev" 00:18:57.004 }, 00:18:57.004 "method": "bdev_xnvme_create" 00:18:57.004 }, 00:18:57.004 { 00:18:57.004 "method": "bdev_wait_for_examine" 00:18:57.004 } 00:18:57.004 ] 00:18:57.004 } 00:18:57.004 ] 00:18:57.004 } 00:18:57.004 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:57.004 fio-3.35 00:18:57.004 Starting 1 thread 00:19:03.578 00:19:03.578 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74708: Tue Dec 10 14:24:27 2024 00:19:03.578 write: IOPS=27.0k, BW=106MiB/s (111MB/s)(528MiB/5001msec); 0 zone resets 00:19:03.578 slat (usec): min=2, max=106, avg= 7.21, stdev= 3.98 00:19:03.578 clat (usec): min=134, max=3752, avg=2081.75, stdev=525.01 00:19:03.578 lat (usec): min=139, max=3782, avg=2088.96, stdev=527.37 00:19:03.578 clat percentiles (usec): 00:19:03.578 | 1.00th=[ 1057], 5.00th=[ 1172], 10.00th=[ 1287], 20.00th=[ 1516], 00:19:03.578 | 30.00th=[ 1745], 40.00th=[ 1991], 50.00th=[ 2212], 60.00th=[ 2343], 00:19:03.578 | 70.00th=[ 2442], 80.00th=[ 2573], 90.00th=[ 2704], 95.00th=[ 2802], 00:19:03.578 | 99.00th=[ 2900], 99.50th=[ 2966], 99.90th=[ 3261], 99.95th=[ 3359], 00:19:03.578 | 99.99th=[ 3621] 00:19:03.578 bw ( KiB/s): min=86016, max=143384, per=100.00%, avg=108630.00, stdev=21633.47, samples=9 00:19:03.578 iops : min=21504, max=35844, avg=27157.44, stdev=5408.27, samples=9 00:19:03.578 lat (usec) : 250=0.01%, 1000=0.22% 00:19:03.578 lat (msec) : 2=40.39%, 4=59.39% 00:19:03.578 cpu : usr=50.34%, sys=46.14%, ctx=8, majf=0, minf=763 00:19:03.578 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:03.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.578 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:03.578 issued rwts: total=0,135166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.578 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.578 00:19:03.578 Run status group 0 (all jobs): 00:19:03.578 WRITE: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=528MiB (554MB), run=5001-5001msec 00:19:04.515 ----------------------------------------------------- 00:19:04.515 Suppressions used: 00:19:04.515 count bytes template 00:19:04.515 1 11 /usr/src/fio/parse.c 00:19:04.515 1 8 libtcmalloc_minimal.so 00:19:04.515 1 904 libcrypto.so 00:19:04.515 ----------------------------------------------------- 00:19:04.515 00:19:04.515 00:19:04.515 real 0m15.022s 00:19:04.515 user 0m8.876s 00:19:04.515 sys 0m5.536s 00:19:04.515 14:24:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.515 14:24:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:04.515 ************************************ 00:19:04.515 END TEST xnvme_fio_plugin 00:19:04.515 ************************************ 00:19:04.515 14:24:29 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 74165 00:19:04.515 14:24:29 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74165 ']' 00:19:04.515 14:24:29 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 74165 00:19:04.515 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74165) - No such process 00:19:04.515 Process with pid 74165 is not found 00:19:04.515 14:24:29 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 74165 is not found' 00:19:04.515 00:19:04.515 real 3m55.423s 00:19:04.515 user 2m9.438s 00:19:04.515 sys 1m30.697s 00:19:04.515 14:24:29 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.515 14:24:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:04.515 ************************************ 00:19:04.515 END TEST nvme_xnvme 00:19:04.515 ************************************ 00:19:04.515 14:24:29 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:04.515 14:24:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:04.515 14:24:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.515 14:24:29 -- common/autotest_common.sh@10 -- # set +x 00:19:04.515 ************************************ 00:19:04.515 START TEST blockdev_xnvme 00:19:04.515 ************************************ 00:19:04.515 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:04.515 * Looking for test storage... 00:19:04.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.775 14:24:29 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:04.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.775 --rc genhtml_branch_coverage=1 00:19:04.775 --rc genhtml_function_coverage=1 00:19:04.775 --rc genhtml_legend=1 00:19:04.775 --rc geninfo_all_blocks=1 00:19:04.775 --rc geninfo_unexecuted_blocks=1 00:19:04.775 00:19:04.775 ' 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:04.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.775 --rc genhtml_branch_coverage=1 00:19:04.775 --rc genhtml_function_coverage=1 00:19:04.775 --rc genhtml_legend=1 00:19:04.775 --rc geninfo_all_blocks=1 00:19:04.775 --rc geninfo_unexecuted_blocks=1 00:19:04.775 00:19:04.775 ' 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:04.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.775 --rc genhtml_branch_coverage=1 00:19:04.775 --rc genhtml_function_coverage=1 00:19:04.775 --rc genhtml_legend=1 00:19:04.775 --rc geninfo_all_blocks=1 00:19:04.775 --rc geninfo_unexecuted_blocks=1 00:19:04.775 00:19:04.775 ' 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:04.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.775 --rc genhtml_branch_coverage=1 00:19:04.775 --rc genhtml_function_coverage=1 00:19:04.775 --rc genhtml_legend=1 00:19:04.775 --rc geninfo_all_blocks=1 00:19:04.775 --rc geninfo_unexecuted_blocks=1 00:19:04.775 00:19:04.775 ' 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74849 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:04.775 14:24:29 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74849 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74849 ']' 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.775 14:24:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:04.775 [2024-12-10 14:24:29.586588] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
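At this point blockdev.sh launches spdk_tgt and blocks in waitforlisten until the RPC socket answers. A rough sketch of that pattern (the poll loop below is an assumption for illustration; the real waitforlisten helper in autotest_common.sh is more thorough about timeouts and pid checks):

    # Start the target in the background, then poll /var/tmp/spdk.sock
    # until it accepts an RPC before driving it with rpc.py.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done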
00:19:04.775 [2024-12-10 14:24:29.586929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74849 ] 00:19:05.035 [2024-12-10 14:24:29.763821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.294 [2024-12-10 14:24:29.895611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.232 14:24:30 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.232 14:24:30 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:19:06.232 14:24:30 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:06.232 14:24:30 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:06.232 14:24:30 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:06.232 14:24:30 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:06.232 14:24:30 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:06.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:07.738 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:07.738 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:07.738 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:19:07.738 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:19:07.738 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:19:07.738 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:19:07.739 14:24:32 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:19:07.739 nvme0n1 00:19:07.739 nvme0n2 00:19:07.739 nvme0n3 00:19:07.739 nvme1n1 00:19:07.739 nvme2n1 00:19:07.739 nvme3n1 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.739 
14:24:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:07.739 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.739 14:24:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.999 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:07.999 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "deb8ae91-3121-4ed9-8153-dd02a63b4fbb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "deb8ae91-3121-4ed9-8153-dd02a63b4fbb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7b577269-523c-4884-9500-4601627aa796"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7b577269-523c-4884-9500-4601627aa796",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c688632a-f6b7-4673-b1f4-11378fb999ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c688632a-f6b7-4673-b1f4-11378fb999ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"73233923-a9a8-4e13-9d7b-ef75a9532ecd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "73233923-a9a8-4e13-9d7b-ef75a9532ecd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b3a06199-08ee-4574-aafd-44d34ae77e4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b3a06199-08ee-4574-aafd-44d34ae77e4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b1c81d08-883a-4c24-8f7a-1898de7e29b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b1c81d08-883a-4c24-8f7a-1898de7e29b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:07.999 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:07.999 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:07.999 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:07.999 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:07.999 14:24:32 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74849 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74849 ']' 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74849 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 74849 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.999 killing process with pid 74849 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74849' 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74849 00:19:07.999 14:24:32 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74849 00:19:10.536 14:24:35 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:10.536 14:24:35 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:10.536 14:24:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:10.536 14:24:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.536 14:24:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.536 ************************************ 00:19:10.536 START TEST bdev_hello_world 00:19:10.536 ************************************ 00:19:10.536 14:24:35 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:10.795 [2024-12-10 14:24:35.405295] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:19:10.795 [2024-12-10 14:24:35.405421] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75151 ] 00:19:10.795 [2024-12-10 14:24:35.589760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.054 [2024-12-10 14:24:35.719060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.622 [2024-12-10 14:24:36.201256] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:11.622 [2024-12-10 14:24:36.201303] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:11.622 [2024-12-10 14:24:36.201330] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:11.622 [2024-12-10 14:24:36.203742] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:11.622 [2024-12-10 14:24:36.204196] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:11.622 [2024-12-10 14:24:36.204230] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:11.622 [2024-12-10 14:24:36.204484] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
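The hello_bdev run traced here, extracted as a standalone command (paths are from this log; bdev.json is the config listing the six xnvme bdevs created earlier):

    # Opens bdev nvme0n1, writes "Hello World!", reads it back, then stops,
    # as the NOTICE lines above show.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1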
00:19:11.622 00:19:11.622 [2024-12-10 14:24:36.204511] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:12.558 00:19:12.558 real 0m2.077s 00:19:12.558 user 0m1.628s 00:19:12.558 sys 0m0.332s 00:19:12.559 14:24:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.559 14:24:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:12.559 ************************************ 00:19:12.559 END TEST bdev_hello_world 00:19:12.559 ************************************ 00:19:12.818 14:24:37 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:12.818 14:24:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:12.818 14:24:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.818 14:24:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:12.818 ************************************ 00:19:12.818 START TEST bdev_bounds 00:19:12.818 ************************************ 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75194 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75194' 00:19:12.818 Process bdevio pid: 75194 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75194 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 75194 ']' 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.818 14:24:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:12.818 [2024-12-10 14:24:37.570994] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:19:12.818 [2024-12-10 14:24:37.571128] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75194 ] 00:19:13.077 [2024-12-10 14:24:37.758051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:13.078 [2024-12-10 14:24:37.889743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.078 [2024-12-10 14:24:37.889879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.078 [2024-12-10 14:24:37.889926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.644 14:24:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.644 14:24:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:13.644 14:24:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:13.902 I/O targets: 00:19:13.902 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:13.902 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:13.902 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:13.902 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:13.902 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:13.902 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:13.903 00:19:13.903 00:19:13.903 CUnit - A unit testing framework for C - Version 2.1-3 00:19:13.903 http://cunit.sourceforge.net/ 00:19:13.903 00:19:13.903 00:19:13.903 Suite: bdevio tests on: nvme3n1 00:19:13.903 Test: blockdev write read block ...passed 00:19:13.903 Test: blockdev write zeroes read block ...passed 00:19:13.903 Test: blockdev write zeroes read no split ...passed 00:19:13.903 Test: blockdev write zeroes read split ...passed 00:19:13.903 Test: blockdev write zeroes read split partial ...passed 00:19:13.903 Test: blockdev reset ...passed 00:19:13.903 Test: blockdev write read 8 blocks ...passed 00:19:13.903 Test: blockdev write read size > 128k ...passed 00:19:13.903 Test: blockdev write read invalid size ...passed 00:19:13.903 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:13.903 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:13.903 Test: blockdev write read max offset ...passed 00:19:13.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:13.903 Test: blockdev writev readv 8 blocks ...passed 00:19:13.903 Test: blockdev writev readv 30 x 1block ...passed 00:19:13.903 Test: blockdev writev readv block ...passed 00:19:13.903 Test: blockdev writev readv size > 128k ...passed 00:19:13.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:13.903 Test: blockdev comparev and writev ...passed 00:19:13.903 Test: blockdev nvme passthru rw ...passed 00:19:13.903 Test: blockdev nvme passthru vendor specific ...passed 00:19:13.903 Test: blockdev nvme admin passthru ...passed 00:19:13.903 Test: blockdev copy ...passed 00:19:13.903 Suite: bdevio tests on: nvme2n1 00:19:13.903 Test: blockdev write read block ...passed 00:19:13.903 Test: blockdev write zeroes read block ...passed 00:19:13.903 Test: blockdev write zeroes read no split ...passed 00:19:13.903 Test: blockdev write zeroes read split ...passed 00:19:13.903 Test: blockdev write zeroes read split partial ...passed 00:19:13.903 Test: blockdev reset ...passed 
00:19:13.903 Test: blockdev write read 8 blocks ...passed 00:19:13.903 Test: blockdev write read size > 128k ...passed 00:19:13.903 Test: blockdev write read invalid size ...passed 00:19:13.903 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:13.903 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:13.903 Test: blockdev write read max offset ...passed 00:19:13.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:13.903 Test: blockdev writev readv 8 blocks ...passed 00:19:13.903 Test: blockdev writev readv 30 x 1block ...passed 00:19:13.903 Test: blockdev writev readv block ...passed 00:19:13.903 Test: blockdev writev readv size > 128k ...passed 00:19:13.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:13.903 Test: blockdev comparev and writev ...passed 00:19:13.903 Test: blockdev nvme passthru rw ...passed 00:19:13.903 Test: blockdev nvme passthru vendor specific ...passed 00:19:13.903 Test: blockdev nvme admin passthru ...passed 00:19:13.903 Test: blockdev copy ...passed 00:19:13.903 Suite: bdevio tests on: nvme1n1 00:19:13.903 Test: blockdev write read block ...passed 00:19:13.903 Test: blockdev write zeroes read block ...passed 00:19:13.903 Test: blockdev write zeroes read no split ...passed 00:19:14.162 Test: blockdev write zeroes read split ...passed 00:19:14.162 Test: blockdev write zeroes read split partial ...passed 00:19:14.162 Test: blockdev reset ...passed 00:19:14.162 Test: blockdev write read 8 blocks ...passed 00:19:14.162 Test: blockdev write read size > 128k ...passed 00:19:14.162 Test: blockdev write read invalid size ...passed 00:19:14.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:14.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:14.162 Test: blockdev write read max offset ...passed 00:19:14.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:14.162 Test: blockdev writev readv 8 blocks ...passed 00:19:14.162 Test: blockdev writev readv 30 x 1block ...passed 00:19:14.162 Test: blockdev writev readv block ...passed 00:19:14.162 Test: blockdev writev readv size > 128k ...passed 00:19:14.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:14.162 Test: blockdev comparev and writev ...passed 00:19:14.162 Test: blockdev nvme passthru rw ...passed 00:19:14.162 Test: blockdev nvme passthru vendor specific ...passed 00:19:14.162 Test: blockdev nvme admin passthru ...passed 00:19:14.162 Test: blockdev copy ...passed 00:19:14.162 Suite: bdevio tests on: nvme0n3 00:19:14.162 Test: blockdev write read block ...passed 00:19:14.162 Test: blockdev write zeroes read block ...passed 00:19:14.162 Test: blockdev write zeroes read no split ...passed 00:19:14.162 Test: blockdev write zeroes read split ...passed 00:19:14.162 Test: blockdev write zeroes read split partial ...passed 00:19:14.162 Test: blockdev reset ...passed 00:19:14.162 Test: blockdev write read 8 blocks ...passed 00:19:14.162 Test: blockdev write read size > 128k ...passed 00:19:14.162 Test: blockdev write read invalid size ...passed 00:19:14.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:14.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:14.162 Test: blockdev write read max offset ...passed 00:19:14.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:14.162 Test: blockdev writev readv 8 blocks 
...passed 00:19:14.162 Test: blockdev writev readv 30 x 1block ...passed 00:19:14.162 Test: blockdev writev readv block ...passed 00:19:14.162 Test: blockdev writev readv size > 128k ...passed 00:19:14.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:14.162 Test: blockdev comparev and writev ...passed 00:19:14.162 Test: blockdev nvme passthru rw ...passed 00:19:14.162 Test: blockdev nvme passthru vendor specific ...passed 00:19:14.162 Test: blockdev nvme admin passthru ...passed 00:19:14.162 Test: blockdev copy ...passed 00:19:14.162 Suite: bdevio tests on: nvme0n2 00:19:14.162 Test: blockdev write read block ...passed 00:19:14.162 Test: blockdev write zeroes read block ...passed 00:19:14.162 Test: blockdev write zeroes read no split ...passed 00:19:14.162 Test: blockdev write zeroes read split ...passed 00:19:14.162 Test: blockdev write zeroes read split partial ...passed 00:19:14.162 Test: blockdev reset ...passed 00:19:14.162 Test: blockdev write read 8 blocks ...passed 00:19:14.162 Test: blockdev write read size > 128k ...passed 00:19:14.162 Test: blockdev write read invalid size ...passed 00:19:14.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:14.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:14.162 Test: blockdev write read max offset ...passed 00:19:14.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:14.162 Test: blockdev writev readv 8 blocks ...passed 00:19:14.162 Test: blockdev writev readv 30 x 1block ...passed 00:19:14.162 Test: blockdev writev readv block ...passed 00:19:14.162 Test: blockdev writev readv size > 128k ...passed 00:19:14.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:14.162 Test: blockdev comparev and writev ...passed 00:19:14.162 Test: blockdev nvme passthru rw ...passed 00:19:14.162 Test: blockdev nvme passthru vendor specific ...passed 00:19:14.162 Test: blockdev nvme admin passthru ...passed 00:19:14.162 Test: blockdev copy ...passed 00:19:14.162 Suite: bdevio tests on: nvme0n1 00:19:14.162 Test: blockdev write read block ...passed 00:19:14.162 Test: blockdev write zeroes read block ...passed 00:19:14.162 Test: blockdev write zeroes read no split ...passed 00:19:14.422 Test: blockdev write zeroes read split ...passed 00:19:14.422 Test: blockdev write zeroes read split partial ...passed 00:19:14.422 Test: blockdev reset ...passed 00:19:14.422 Test: blockdev write read 8 blocks ...passed 00:19:14.422 Test: blockdev write read size > 128k ...passed 00:19:14.422 Test: blockdev write read invalid size ...passed 00:19:14.422 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:14.422 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:14.422 Test: blockdev write read max offset ...passed 00:19:14.422 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:14.422 Test: blockdev writev readv 8 blocks ...passed 00:19:14.422 Test: blockdev writev readv 30 x 1block ...passed 00:19:14.422 Test: blockdev writev readv block ...passed 00:19:14.422 Test: blockdev writev readv size > 128k ...passed 00:19:14.422 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:14.422 Test: blockdev comparev and writev ...passed 00:19:14.422 Test: blockdev nvme passthru rw ...passed 00:19:14.422 Test: blockdev nvme passthru vendor specific ...passed 00:19:14.422 Test: blockdev nvme admin passthru ...passed 00:19:14.422 Test: blockdev copy ...passed 
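[annotation] The six suites above are all driven the same way: the harness launches the bdevio app in wait-for-RPC mode (-w) against the bdev JSON config, waits for the UNIX-domain RPC socket, then fires every registered CUnit suite via tests.py perform_tests. A minimal sketch of that sequence — the paths match this log, but the socket-polling loop and the use of spdk_get_version as a readiness probe are assumptions, not the harness's exact code:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk    # checkout location, as seen in the xtrace above
    "$SPDK_DIR"/test/bdev/bdevio/bdevio -w -s 0 --json "$SPDK_DIR"/test/bdev/bdev.json '' &
    bdevio_pid=$!
    # Assumed readiness check: poll the default RPC socket with a cheap no-op call
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    "$SPDK_DIR"/test/bdev/bdevio/tests.py perform_tests    # runs all CUnit suites; summary follows below
    kill "$bdevio_pid"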
00:19:14.422 00:19:14.422 Run Summary: Type Total Ran Passed Failed Inactive 00:19:14.422 suites 6 6 n/a 0 0 00:19:14.422 tests 138 138 138 0 0 00:19:14.422 asserts 780 780 780 0 n/a 00:19:14.422 00:19:14.422 Elapsed time = 1.549 seconds 00:19:14.422 0 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75194 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 75194 ']' 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 75194 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75194 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75194' 00:19:14.422 killing process with pid 75194 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 75194 00:19:14.422 14:24:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 75194 00:19:15.802 14:24:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:15.802 ************************************ 00:19:15.802 END TEST bdev_bounds 00:19:15.802 ************************************ 00:19:15.802 00:19:15.802 real 0m2.897s 00:19:15.802 user 0m7.025s 00:19:15.802 sys 0m0.520s 00:19:15.802 14:24:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.802 14:24:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:15.802 14:24:40 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:15.802 14:24:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:15.802 14:24:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.802 14:24:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:15.802 ************************************ 00:19:15.802 START TEST bdev_nbd 00:19:15.802 ************************************ 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75254 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:15.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75254 /var/tmp/spdk-nbd.sock 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 75254 ']' 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.802 14:24:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:15.802 [2024-12-10 14:24:40.551882] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:19:15.802 [2024-12-10 14:24:40.551994] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.061 [2024-12-10 14:24:40.733605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.061 [2024-12-10 14:24:40.867849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:16.630 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.890 
1+0 records in 00:19:16.890 1+0 records out 00:19:16.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509498 s, 8.0 MB/s 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:16.890 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:19:17.149 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:17.149 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:17.149 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:17.149 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:17.149 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:17.149 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:17.149 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:17.150 1+0 records in 00:19:17.150 1+0 records out 00:19:17.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737653 s, 5.6 MB/s 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:17.150 14:24:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:17.409 14:24:42 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:17.409 1+0 records in 00:19:17.409 1+0 records out 00:19:17.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00221617 s, 1.8 MB/s 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:17.409 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:17.669 1+0 records in 00:19:17.669 1+0 records out 00:19:17.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807192 s, 5.1 MB/s 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:17.669 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:17.928 1+0 records in 00:19:17.928 1+0 records out 00:19:17.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746029 s, 5.5 MB/s 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:17.928 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:17.929 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:19:18.190 14:24:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:18.190 1+0 records in 00:19:18.190 1+0 records out 00:19:18.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885418 s, 4.6 MB/s 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:18.190 14:24:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:18.471 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:18.471 { 00:19:18.471 "nbd_device": "/dev/nbd0", 00:19:18.471 "bdev_name": "nvme0n1" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd1", 00:19:18.472 "bdev_name": "nvme0n2" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd2", 00:19:18.472 "bdev_name": "nvme0n3" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd3", 00:19:18.472 "bdev_name": "nvme1n1" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd4", 00:19:18.472 "bdev_name": "nvme2n1" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd5", 00:19:18.472 "bdev_name": "nvme3n1" 00:19:18.472 } 00:19:18.472 ]' 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd0", 00:19:18.472 "bdev_name": "nvme0n1" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd1", 00:19:18.472 "bdev_name": "nvme0n2" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd2", 00:19:18.472 "bdev_name": "nvme0n3" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd3", 00:19:18.472 "bdev_name": "nvme1n1" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": "/dev/nbd4", 00:19:18.472 "bdev_name": "nvme2n1" 00:19:18.472 }, 00:19:18.472 { 00:19:18.472 "nbd_device": 
"/dev/nbd5", 00:19:18.472 "bdev_name": "nvme3n1" 00:19:18.472 } 00:19:18.472 ]' 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:18.472 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:18.771 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.031 14:24:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.290 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:19.549 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:19.549 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:19.549 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:19.549 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.549 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.549 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:19.549 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:19.550 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.550 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.550 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:19.809 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:20.068 14:24:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:20.328 /dev/nbd0 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:20.328 1+0 records in 00:19:20.328 1+0 records out 00:19:20.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715444 s, 5.7 MB/s 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:20.328 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:19:20.587 /dev/nbd1 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:20.587 1+0 records in 00:19:20.587 1+0 records out 00:19:20.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862459 s, 4.7 MB/s 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:20.587 14:24:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:20.587 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:19:20.846 /dev/nbd10 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:20.846 1+0 records in 00:19:20.846 1+0 records out 00:19:20.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0023152 s, 1.8 MB/s 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:20.846 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:19:21.105 /dev/nbd11 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd 
-- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:21.105 1+0 records in 00:19:21.105 1+0 records out 00:19:21.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00312333 s, 1.3 MB/s 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:21.105 14:24:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:19:21.365 /dev/nbd12 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:21.365 1+0 records in 00:19:21.365 1+0 records out 00:19:21.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796315 s, 5.1 MB/s 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:21.365 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:21.624 /dev/nbd13 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:21.624 1+0 records in 00:19:21.624 1+0 records out 00:19:21.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735063 s, 5.6 MB/s 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:21.624 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd0", 00:19:21.884 "bdev_name": "nvme0n1" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd1", 00:19:21.884 "bdev_name": "nvme0n2" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd10", 00:19:21.884 "bdev_name": "nvme0n3" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd11", 00:19:21.884 "bdev_name": "nvme1n1" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd12", 00:19:21.884 "bdev_name": "nvme2n1" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd13", 00:19:21.884 "bdev_name": "nvme3n1" 00:19:21.884 } 00:19:21.884 ]' 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 
-- # echo '[ 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd0", 00:19:21.884 "bdev_name": "nvme0n1" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd1", 00:19:21.884 "bdev_name": "nvme0n2" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd10", 00:19:21.884 "bdev_name": "nvme0n3" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd11", 00:19:21.884 "bdev_name": "nvme1n1" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd12", 00:19:21.884 "bdev_name": "nvme2n1" 00:19:21.884 }, 00:19:21.884 { 00:19:21.884 "nbd_device": "/dev/nbd13", 00:19:21.884 "bdev_name": "nvme3n1" 00:19:21.884 } 00:19:21.884 ]' 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:21.884 /dev/nbd1 00:19:21.884 /dev/nbd10 00:19:21.884 /dev/nbd11 00:19:21.884 /dev/nbd12 00:19:21.884 /dev/nbd13' 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:21.884 /dev/nbd1 00:19:21.884 /dev/nbd10 00:19:21.884 /dev/nbd11 00:19:21.884 /dev/nbd12 00:19:21.884 /dev/nbd13' 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:21.884 256+0 records in 00:19:21.884 256+0 records out 00:19:21.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113712 s, 92.2 MB/s 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:21.884 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:22.143 256+0 records in 00:19:22.143 256+0 records out 00:19:22.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127019 s, 8.3 MB/s 00:19:22.143 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:22.143 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:22.143 256+0 records in 00:19:22.143 256+0 records out 00:19:22.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127832 s, 8.2 MB/s 00:19:22.143 
14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:22.143 14:24:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:22.402 256+0 records in 00:19:22.402 256+0 records out 00:19:22.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12654 s, 8.3 MB/s 00:19:22.402 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:22.402 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:22.402 256+0 records in 00:19:22.402 256+0 records out 00:19:22.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154811 s, 6.8 MB/s 00:19:22.661 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:22.661 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:22.661 256+0 records in 00:19:22.661 256+0 records out 00:19:22.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128111 s, 8.2 MB/s 00:19:22.661 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:22.661 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:22.921 256+0 records in 00:19:22.921 256+0 records out 00:19:22.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129275 s, 8.1 MB/s 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:19:22.921 14:24:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:22.921 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:23.180 14:24:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:23.180 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:23.439 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 
00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:23.440 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:23.699 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:23.958 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:24.216 14:24:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:24.216 14:24:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:24.475 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:24.734 malloc_lvol_verify 00:19:24.734 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:24.994 aded0845-b2db-4069-affb-863799320882 00:19:24.994 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:25.253 81fe24e2-7f23-4bad-8755-7a19102acf94 00:19:25.253 14:24:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:25.253 /dev/nbd0 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:25.253 mke2fs 1.47.0 (5-Feb-2023) 00:19:25.253 Discarding device blocks: 0/4096 done 
00:19:25.253 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:25.253 00:19:25.253 Allocating group tables: 0/1 done 00:19:25.253 Writing inode tables: 0/1 done 00:19:25.253 Creating journal (1024 blocks): done 00:19:25.253 Writing superblocks and filesystem accounting information: 0/1 done 00:19:25.253 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.253 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:25.512 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:25.512 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.512 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.512 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75254 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 75254 ']' 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 75254 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75254 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.513 killing process with pid 75254 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75254' 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 75254 00:19:25.513 14:24:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 75254 00:19:26.892 14:24:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:26.892 00:19:26.892 real 0m11.142s 00:19:26.892 user 0m14.023s 00:19:26.892 sys 0m4.987s 00:19:26.892 14:24:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.892 14:24:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:26.892 ************************************ 00:19:26.892 END TEST bdev_nbd 00:19:26.892 ************************************ 00:19:26.892 
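Both attach and detach in the test that just ended are synchronized by polling /proc/partitions: waitfornbd retries until the device name appears and then issues a single 4 KiB O_DIRECT read to prove the device actually serves I/O, while waitfornbd_exit retries until the name disappears after nbd_stop_disk. A minimal sketch of the two helpers, assuming the fixed retry budget of 20 visible in the trace (the sleep between attempts is an assumption — the trace does not show one — and error handling is trimmed):

waitfornbd() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off between retries
    done
    # Prove the device is usable, not merely listed in /proc/partitions.
    dd if=/dev/"$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1   # assumed back-off between retries
    done
}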
14:24:51 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:26.892 14:24:51 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:19:26.892 14:24:51 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:19:26.892 14:24:51 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:26.892 14:24:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.892 14:24:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.892 14:24:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:26.892 ************************************ 00:19:26.892 START TEST bdev_fio 00:19:26.892 ************************************ 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:26.892 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:26.892 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- 
# for b in "${bdevs_name[@]}" 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:27.152 ************************************ 00:19:27.152 START TEST bdev_fio_rw_verify 00:19:27.152 ************************************ 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:27.152 14:24:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:27.412 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:27.412 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:27.412 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:27.412 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:27.412 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:27.412 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:27.412 fio-3.35 00:19:27.412 Starting 6 threads 00:19:39.621 00:19:39.621 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75668: Tue Dec 10 14:25:03 2024 00:19:39.621 read: IOPS=33.7k, BW=132MiB/s (138MB/s)(1316MiB/10001msec) 00:19:39.621 slat (usec): min=2, max=1180, avg= 8.53, stdev= 7.08 00:19:39.621 clat (usec): min=92, max=5380, avg=521.83, stdev=247.68 00:19:39.621 lat (usec): min=96, max=5387, avg=530.36, stdev=249.12 
00:19:39.621 clat percentiles (usec): 00:19:39.621 | 50.000th=[ 498], 99.000th=[ 1221], 99.900th=[ 1860], 99.990th=[ 3720], 00:19:39.621 | 99.999th=[ 5342] 00:19:39.621 write: IOPS=34.1k, BW=133MiB/s (140MB/s)(1332MiB/10001msec); 0 zone resets 00:19:39.621 slat (usec): min=10, max=2799, avg=27.77, stdev=37.41 00:19:39.621 clat (usec): min=80, max=4784, avg=632.95, stdev=274.54 00:19:39.621 lat (usec): min=96, max=4803, avg=660.71, stdev=280.87 00:19:39.621 clat percentiles (usec): 00:19:39.621 | 50.000th=[ 603], 99.000th=[ 1467], 99.900th=[ 1975], 99.990th=[ 2900], 00:19:39.621 | 99.999th=[ 4752] 00:19:39.621 bw ( KiB/s): min=100581, max=166840, per=99.85%, avg=136215.68, stdev=3005.01, samples=114 00:19:39.621 iops : min=25145, max=41710, avg=34053.58, stdev=751.26, samples=114 00:19:39.621 lat (usec) : 100=0.01%, 250=8.57%, 500=33.58%, 750=35.35%, 1000=16.26% 00:19:39.621 lat (msec) : 2=6.16%, 4=0.08%, 10=0.01% 00:19:39.621 cpu : usr=53.61%, sys=30.76%, ctx=7966, majf=0, minf=27966 00:19:39.621 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.621 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.621 issued rwts: total=336901,341074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.621 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:39.621 00:19:39.621 Run status group 0 (all jobs): 00:19:39.621 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=1316MiB (1380MB), run=10001-10001msec 00:19:39.621 WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=1332MiB (1397MB), run=10001-10001msec 00:19:39.881 ----------------------------------------------------- 00:19:39.881 Suppressions used: 00:19:39.881 count bytes template 00:19:39.882 6 48 /usr/src/fio/parse.c 00:19:39.882 3909 375264 /usr/src/fio/iolog.c 00:19:39.882 1 8 libtcmalloc_minimal.so 00:19:39.882 1 904 libcrypto.so 00:19:39.882 ----------------------------------------------------- 00:19:39.882 00:19:39.882 00:19:39.882 real 0m12.737s 00:19:39.882 user 0m34.203s 00:19:39.882 sys 0m18.989s 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:39.882 ************************************ 00:19:39.882 END TEST bdev_fio_rw_verify 00:19:39.882 ************************************ 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "deb8ae91-3121-4ed9-8153-dd02a63b4fbb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "deb8ae91-3121-4ed9-8153-dd02a63b4fbb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7b577269-523c-4884-9500-4601627aa796"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7b577269-523c-4884-9500-4601627aa796",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c688632a-f6b7-4673-b1f4-11378fb999ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c688632a-f6b7-4673-b1f4-11378fb999ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "73233923-a9a8-4e13-9d7b-ef75a9532ecd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "73233923-a9a8-4e13-9d7b-ef75a9532ecd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b3a06199-08ee-4574-aafd-44d34ae77e4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b3a06199-08ee-4574-aafd-44d34ae77e4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b1c81d08-883a-4c24-8f7a-1898de7e29b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b1c81d08-883a-4c24-8f7a-1898de7e29b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:39.882 /home/vagrant/spdk_repo/spdk 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:39.882 00:19:39.882 real 0m12.987s 00:19:39.882 user 
0m34.325s 00:19:39.882 sys 0m19.122s 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.882 14:25:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:39.882 ************************************ 00:19:39.882 END TEST bdev_fio 00:19:39.882 ************************************ 00:19:39.882 14:25:04 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:39.882 14:25:04 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:39.882 14:25:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:39.882 14:25:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.882 14:25:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:40.142 ************************************ 00:19:40.142 START TEST bdev_verify 00:19:40.142 ************************************ 00:19:40.142 14:25:04 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:40.142 [2024-12-10 14:25:04.837171] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:19:40.142 [2024-12-10 14:25:04.837302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75837 ] 00:19:40.399 [2024-12-10 14:25:05.022556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:40.400 [2024-12-10 14:25:05.152524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.400 [2024-12-10 14:25:05.152553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.967 Running I/O for 5 seconds... 
00:19:43.283 22976.00 IOPS, 89.75 MiB/s [2024-12-10T14:25:09.053Z] 23904.00 IOPS, 93.38 MiB/s [2024-12-10T14:25:09.990Z] 23040.00 IOPS, 90.00 MiB/s [2024-12-10T14:25:10.926Z] 22152.00 IOPS, 86.53 MiB/s [2024-12-10T14:25:10.926Z] 21612.80 IOPS, 84.42 MiB/s 00:19:46.092 Latency(us) 00:19:46.092 [2024-12-10T14:25:10.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.092 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:46.092 Verification LBA range: start 0x0 length 0x80000 00:19:46.092 nvme0n1 : 5.03 1831.14 7.15 0.00 0.00 69786.65 10422.59 74958.44 00:19:46.092 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:46.092 Verification LBA range: start 0x80000 length 0x80000 00:19:46.092 nvme0n1 : 5.07 1463.22 5.72 0.00 0.00 87321.15 9580.36 118754.39 00:19:46.092 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:46.092 Verification LBA range: start 0x0 length 0x80000 00:19:46.092 nvme0n2 : 5.05 1825.73 7.13 0.00 0.00 69869.84 11212.18 70326.18 00:19:46.092 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:46.092 Verification LBA range: start 0x80000 length 0x80000 00:19:46.092 nvme0n2 : 5.05 1445.69 5.65 0.00 0.00 88238.57 10106.76 108647.63 00:19:46.092 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:46.092 Verification LBA range: start 0x0 length 0x80000 00:19:46.092 nvme0n3 : 5.06 1821.30 7.11 0.00 0.00 69916.21 7474.79 72431.76 00:19:46.092 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:46.092 Verification LBA range: start 0x80000 length 0x80000 00:19:46.092 nvme0n3 : 5.06 1440.68 5.63 0.00 0.00 88404.90 10896.35 101488.68 00:19:46.092 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:46.092 Verification LBA range: start 0x0 length 0xbd0bd 00:19:46.092 nvme1n1 : 5.07 2661.52 10.40 0.00 0.00 47714.10 7211.59 66957.26 00:19:46.092 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:46.092 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:46.092 nvme1n1 : 5.08 2311.28 9.03 0.00 0.00 54853.64 6843.12 84644.09 00:19:46.092 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:46.093 Verification LBA range: start 0x0 length 0xa0000 00:19:46.093 nvme2n1 : 5.07 1843.38 7.20 0.00 0.00 68708.40 8211.74 76642.90 00:19:46.093 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:46.093 Verification LBA range: start 0xa0000 length 0xa0000 00:19:46.093 nvme2n1 : 5.09 1458.78 5.70 0.00 0.00 86863.08 5474.49 100646.45 00:19:46.093 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:46.093 Verification LBA range: start 0x0 length 0x20000 00:19:46.093 nvme3n1 : 5.06 1820.27 7.11 0.00 0.00 69438.13 9475.08 73695.10 00:19:46.093 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:46.093 Verification LBA range: start 0x20000 length 0x20000 00:19:46.093 nvme3n1 : 5.07 1440.13 5.63 0.00 0.00 87804.70 5158.66 116227.70 00:19:46.093 [2024-12-10T14:25:10.927Z] =================================================================================================================== 00:19:46.093 [2024-12-10T14:25:10.927Z] Total : 21363.11 83.45 0.00 0.00 71395.90 5158.66 118754.39 00:19:47.534 00:19:47.534 real 0m7.336s 00:19:47.534 user 0m11.231s 00:19:47.534 sys 0m2.035s 00:19:47.534 14:25:12 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.534 14:25:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:47.534 ************************************ 00:19:47.534 END TEST bdev_verify 00:19:47.534 ************************************ 00:19:47.534 14:25:12 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:47.534 14:25:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:47.534 14:25:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.534 14:25:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:47.534 ************************************ 00:19:47.534 START TEST bdev_verify_big_io 00:19:47.534 ************************************ 00:19:47.534 14:25:12 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:47.534 [2024-12-10 14:25:12.237481] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:19:47.534 [2024-12-10 14:25:12.238130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75945 ] 00:19:47.793 [2024-12-10 14:25:12.417564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:47.793 [2024-12-10 14:25:12.555446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.793 [2024-12-10 14:25:12.555475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.731 Running I/O for 5 seconds... 
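Every START TEST/END TEST banner pair in this log, with the real/user/sys line between them, is produced by the run_test wrapper from common/autotest_common.sh, which times the wrapped command and propagates its exit code. A simplified sketch of such a wrapper — illustrative only; the real helper also manages xtrace state and the argument-count checks visible in the trace, which are omitted here:

run_test() {
    local test_name=$1; shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"          # bash's time keyword prints real/user/sys afterward
    local rc=$?        # captures the wrapped command's exit status
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
    return $rc
}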
00:19:53.171 1428.00 IOPS, 89.25 MiB/s [2024-12-10T14:25:18.941Z] 2875.00 IOPS, 179.69 MiB/s [2024-12-10T14:25:19.508Z] 3318.33 IOPS, 207.40 MiB/s 00:19:54.674 Latency(us) 00:19:54.674 [2024-12-10T14:25:19.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.674 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x0 length 0x8000 00:19:54.674 nvme0n1 : 5.54 208.08 13.00 0.00 0.00 601258.44 54323.82 576085.13 00:19:54.674 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x8000 length 0x8000 00:19:54.674 nvme0n1 : 5.74 133.85 8.37 0.00 0.00 917575.92 5474.49 1293664.85 00:19:54.674 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x0 length 0x8000 00:19:54.674 nvme0n2 : 5.54 181.99 11.37 0.00 0.00 676290.57 5079.70 1549702.68 00:19:54.674 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x8000 length 0x8000 00:19:54.674 nvme0n2 : 5.74 86.42 5.40 0.00 0.00 1342779.07 133072.30 2021351.33 00:19:54.674 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x0 length 0x8000 00:19:54.674 nvme0n3 : 5.48 205.73 12.86 0.00 0.00 590306.53 66957.26 667045.94 00:19:54.674 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x8000 length 0x8000 00:19:54.674 nvme0n3 : 5.75 103.04 6.44 0.00 0.00 1090701.00 89697.47 1900070.25 00:19:54.674 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x0 length 0xbd0b 00:19:54.674 nvme1n1 : 5.55 216.39 13.52 0.00 0.00 552333.02 35794.76 1105005.39 00:19:54.674 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0xbd0b length 0xbd0b 00:19:54.674 nvme1n1 : 5.84 126.00 7.87 0.00 0.00 863296.23 3921.63 2075254.03 00:19:54.674 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x0 length 0xa000 00:19:54.674 nvme2n1 : 5.77 221.83 13.86 0.00 0.00 531202.58 228.65 501968.91 00:19:54.674 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0xa000 length 0xa000 00:19:54.674 nvme2n1 : 6.07 116.15 7.26 0.00 0.00 901864.39 888.29 1664245.92 00:19:54.674 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x0 length 0x2000 00:19:54.674 nvme3n1 : 5.64 249.75 15.61 0.00 0.00 458375.69 8527.58 528920.26 00:19:54.674 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:54.674 Verification LBA range: start 0x2000 length 0x2000 00:19:54.674 nvme3n1 : 6.15 297.92 18.62 0.00 0.00 344817.53 786.30 1374518.90 00:19:54.674 [2024-12-10T14:25:19.508Z] =================================================================================================================== 00:19:54.674 [2024-12-10T14:25:19.509Z] Total : 2147.14 134.20 0.00 0.00 645889.16 228.65 2075254.03 00:19:56.578 00:19:56.578 real 0m8.757s 00:19:56.578 user 0m15.848s 00:19:56.578 sys 0m0.661s 00:19:56.578 14:25:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.578 14:25:20 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.578 ************************************ 00:19:56.578 END TEST bdev_verify_big_io 00:19:56.578 ************************************ 00:19:56.578 14:25:20 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:56.578 14:25:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:56.578 14:25:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.578 14:25:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:56.578 ************************************ 00:19:56.578 START TEST bdev_write_zeroes 00:19:56.578 ************************************ 00:19:56.578 14:25:20 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:56.578 [2024-12-10 14:25:21.091194] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:19:56.578 [2024-12-10 14:25:21.091322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76061 ] 00:19:56.578 [2024-12-10 14:25:21.281082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.837 [2024-12-10 14:25:21.412051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.405 Running I/O for 1 seconds... 
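With the write_zeroes run now in flight, all three bdevperf flavors in this section have been launched. Stripped of the shared repository paths, their invocations differ only in I/O size, workload, duration, and core mask (all values taken verbatim from the log above):

# bdev_verify:        4 KiB I/O, 5 s, two reactors (-m 0x3, cores 0 and 1)
bdevperf --json bdev.json -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
# bdev_verify_big_io: 64 KiB I/O, otherwise identical
bdevperf --json bdev.json -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
# bdev_write_zeroes:  4 KiB I/O, 1 s, default single core (no -m), which is
# why the table below reports only Core Mask 0x1 rows
bdevperf --json bdev.json -q 128 -o 4096  -w write_zeroes -t 1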
00:19:58.343 58784.00 IOPS, 229.62 MiB/s 00:19:58.343 Latency(us) 00:19:58.343 [2024-12-10T14:25:23.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.343 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:58.343 nvme0n1 : 1.02 9447.05 36.90 0.00 0.00 13537.01 7001.03 29478.04 00:19:58.343 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:58.343 nvme0n2 : 1.02 9437.50 36.87 0.00 0.00 13542.01 7106.31 29267.48 00:19:58.343 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:58.343 nvme0n3 : 1.02 9428.64 36.83 0.00 0.00 13544.45 7106.31 29267.48 00:19:58.343 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:58.343 nvme1n1 : 1.01 12013.03 46.93 0.00 0.00 10620.62 4684.90 22003.25 00:19:58.343 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:58.343 nvme2n1 : 1.02 9419.65 36.80 0.00 0.00 13477.86 4026.91 27793.58 00:19:58.343 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:58.343 nvme3n1 : 1.02 9410.81 36.76 0.00 0.00 13474.13 3395.24 27793.58 00:19:58.343 [2024-12-10T14:25:23.177Z] =================================================================================================================== 00:19:58.343 [2024-12-10T14:25:23.177Z] Total : 59156.69 231.08 0.00 0.00 12928.81 3395.24 29478.04 00:19:59.723 00:19:59.723 real 0m3.200s 00:19:59.723 user 0m2.316s 00:19:59.723 sys 0m0.689s 00:19:59.723 14:25:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.723 14:25:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 ************************************ 00:19:59.723 END TEST bdev_write_zeroes 00:19:59.723 ************************************ 00:19:59.723 14:25:24 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.723 14:25:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:59.723 14:25:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.723 14:25:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 ************************************ 00:19:59.723 START TEST bdev_json_nonenclosed 00:19:59.723 ************************************ 00:19:59.723 14:25:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.723 [2024-12-10 14:25:24.355409] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:19:59.723 [2024-12-10 14:25:24.355517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76122 ] 00:19:59.723 [2024-12-10 14:25:24.535012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.982 [2024-12-10 14:25:24.672090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.982 [2024-12-10 14:25:24.672214] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:59.982 [2024-12-10 14:25:24.672239] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:59.982 [2024-12-10 14:25:24.672253] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:00.241 00:20:00.241 real 0m0.677s 00:20:00.241 user 0m0.422s 00:20:00.241 sys 0m0.151s 00:20:00.241 14:25:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.241 14:25:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:00.241 ************************************ 00:20:00.241 END TEST bdev_json_nonenclosed 00:20:00.241 ************************************ 00:20:00.242 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:00.242 14:25:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:00.242 14:25:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.242 14:25:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:00.242 ************************************ 00:20:00.242 START TEST bdev_json_nonarray 00:20:00.242 ************************************ 00:20:00.242 14:25:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:00.500 [2024-12-10 14:25:25.117901] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:20:00.500 [2024-12-10 14:25:25.118027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76153 ] 00:20:00.500 [2024-12-10 14:25:25.303485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.758 [2024-12-10 14:25:25.432973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.758 [2024-12-10 14:25:25.433109] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
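bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed configuration and passes only if the app rejects it with the error shown (top-level JSON not enclosed in {}, or a 'subsystems' key that is not an array) and stops with a non-zero status, as the rpc/app_stop lines below confirm. The log never prints the fixture files themselves; a plausible minimal pair that would trigger exactly these two errors — an assumption, not the repository's actual contents:

# Assumed nonenclosed.json: valid JSON, but the top level is an array, not {...}
cat > nonenclosed.json <<'EOF'
[]
EOF

# Assumed nonarray.json: enclosed in {}, but "subsystems" is not an array
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF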
00:20:00.758 [2024-12-10 14:25:25.433136] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:00.758 [2024-12-10 14:25:25.433149] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:01.017 00:20:01.017 real 0m0.684s 00:20:01.017 user 0m0.416s 00:20:01.017 sys 0m0.163s 00:20:01.018 14:25:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.018 14:25:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:01.018 ************************************ 00:20:01.018 END TEST bdev_json_nonarray 00:20:01.018 ************************************ 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:01.018 14:25:25 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:01.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:03.333 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:04.268 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:04.268 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:04.268 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:04.268 00:20:04.268 real 0m59.754s 00:20:04.268 user 1m34.523s 00:20:04.268 sys 0m36.449s 00:20:04.268 14:25:28 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.268 14:25:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:04.268 ************************************ 00:20:04.268 END TEST blockdev_xnvme 00:20:04.268 ************************************ 00:20:04.268 14:25:29 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:04.268 14:25:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:04.268 14:25:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.268 14:25:29 -- common/autotest_common.sh@10 -- # set +x 00:20:04.268 ************************************ 00:20:04.268 START TEST ublk 00:20:04.268 ************************************ 00:20:04.268 14:25:29 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:04.526 * Looking for test storage... 
00:20:04.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:04.526 14:25:29 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:04.526 14:25:29 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:04.526 14:25:29 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:20:04.526 14:25:29 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:04.526 14:25:29 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.526 14:25:29 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.526 14:25:29 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.526 14:25:29 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.526 14:25:29 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.526 14:25:29 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.526 14:25:29 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.526 14:25:29 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.526 14:25:29 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.526 14:25:29 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.526 14:25:29 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.526 14:25:29 ublk -- scripts/common.sh@344 -- # case "$op" in 00:20:04.527 14:25:29 ublk -- scripts/common.sh@345 -- # : 1 00:20:04.527 14:25:29 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.527 14:25:29 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:04.527 14:25:29 ublk -- scripts/common.sh@365 -- # decimal 1 00:20:04.527 14:25:29 ublk -- scripts/common.sh@353 -- # local d=1 00:20:04.527 14:25:29 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.527 14:25:29 ublk -- scripts/common.sh@355 -- # echo 1 00:20:04.527 14:25:29 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.527 14:25:29 ublk -- scripts/common.sh@366 -- # decimal 2 00:20:04.527 14:25:29 ublk -- scripts/common.sh@353 -- # local d=2 00:20:04.527 14:25:29 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.527 14:25:29 ublk -- scripts/common.sh@355 -- # echo 2 00:20:04.527 14:25:29 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.527 14:25:29 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.527 14:25:29 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.527 14:25:29 ublk -- scripts/common.sh@368 -- # return 0 00:20:04.527 14:25:29 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.527 14:25:29 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:04.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.527 --rc genhtml_branch_coverage=1 00:20:04.527 --rc genhtml_function_coverage=1 00:20:04.527 --rc genhtml_legend=1 00:20:04.527 --rc geninfo_all_blocks=1 00:20:04.527 --rc geninfo_unexecuted_blocks=1 00:20:04.527 00:20:04.527 ' 00:20:04.527 14:25:29 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:04.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.527 --rc genhtml_branch_coverage=1 00:20:04.527 --rc genhtml_function_coverage=1 00:20:04.527 --rc genhtml_legend=1 00:20:04.527 --rc geninfo_all_blocks=1 00:20:04.527 --rc geninfo_unexecuted_blocks=1 00:20:04.527 00:20:04.527 ' 00:20:04.527 14:25:29 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:04.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.527 --rc genhtml_branch_coverage=1 00:20:04.527 --rc 
genhtml_function_coverage=1 00:20:04.527 --rc genhtml_legend=1 00:20:04.527 --rc geninfo_all_blocks=1 00:20:04.527 --rc geninfo_unexecuted_blocks=1 00:20:04.527 00:20:04.527 ' 00:20:04.527 14:25:29 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:04.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.527 --rc genhtml_branch_coverage=1 00:20:04.527 --rc genhtml_function_coverage=1 00:20:04.527 --rc genhtml_legend=1 00:20:04.527 --rc geninfo_all_blocks=1 00:20:04.527 --rc geninfo_unexecuted_blocks=1 00:20:04.527 00:20:04.527 ' 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:04.527 14:25:29 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:04.527 14:25:29 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:04.527 14:25:29 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:04.527 14:25:29 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:04.527 14:25:29 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:04.527 14:25:29 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:04.527 14:25:29 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:04.527 14:25:29 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:20:04.527 14:25:29 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:20:04.527 14:25:29 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:04.527 14:25:29 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.527 14:25:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:04.527 ************************************ 00:20:04.527 START TEST test_save_ublk_config 00:20:04.527 ************************************ 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76457 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76457 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76457 ']' 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
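[editor's note] The scripts/common.sh xtrace above is the harness deciding whether the installed lcov predates 2.x so it can export coverage flags the older tool understands: each version string is split on '.', '-' and ':' and the fields are compared numerically, left to right. A condensed, hypothetical restatement of that comparison (the real cmp_versions supports several operators and more edge cases) could look like:

  ver_lt() {
      # true (status 0) when dotted version $1 sorts before $2
      local IFS='.-:' i
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo 'lcov is pre-2.x; keep the branch-coverage opts above'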
00:20:04.527 14:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.527 14:25:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:04.785 [2024-12-10 14:25:29.427814] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:20:04.785 [2024-12-10 14:25:29.428019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76457 ] 00:20:04.785 [2024-12-10 14:25:29.613311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.044 [2024-12-10 14:25:29.741733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.980 14:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.980 14:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:05.980 14:25:30 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:20:05.980 14:25:30 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:20:05.980 14:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.980 14:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:05.980 [2024-12-10 14:25:30.801700] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:05.980 [2024-12-10 14:25:30.802896] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:06.240 malloc0 00:20:06.240 [2024-12-10 14:25:30.899104] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:06.240 [2024-12-10 14:25:30.899205] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:06.240 [2024-12-10 14:25:30.899219] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:06.240 [2024-12-10 14:25:30.899228] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:06.240 [2024-12-10 14:25:30.906728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:06.240 [2024-12-10 14:25:30.906751] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:06.240 [2024-12-10 14:25:30.914732] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:06.240 [2024-12-10 14:25:30.914852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:06.240 [2024-12-10 14:25:30.938725] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:06.240 0 00:20:06.240 14:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.240 14:25:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:20:06.240 14:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.240 14:25:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.499 14:25:31 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:20:06.499 
"subsystems": [ 00:20:06.499 { 00:20:06.499 "subsystem": "fsdev", 00:20:06.499 "config": [ 00:20:06.499 { 00:20:06.499 "method": "fsdev_set_opts", 00:20:06.499 "params": { 00:20:06.499 "fsdev_io_pool_size": 65535, 00:20:06.499 "fsdev_io_cache_size": 256 00:20:06.499 } 00:20:06.499 } 00:20:06.499 ] 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "subsystem": "keyring", 00:20:06.499 "config": [] 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "subsystem": "iobuf", 00:20:06.499 "config": [ 00:20:06.499 { 00:20:06.499 "method": "iobuf_set_options", 00:20:06.499 "params": { 00:20:06.499 "small_pool_count": 8192, 00:20:06.499 "large_pool_count": 1024, 00:20:06.499 "small_bufsize": 8192, 00:20:06.499 "large_bufsize": 135168, 00:20:06.499 "enable_numa": false 00:20:06.499 } 00:20:06.499 } 00:20:06.499 ] 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "subsystem": "sock", 00:20:06.499 "config": [ 00:20:06.499 { 00:20:06.499 "method": "sock_set_default_impl", 00:20:06.499 "params": { 00:20:06.499 "impl_name": "posix" 00:20:06.499 } 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "method": "sock_impl_set_options", 00:20:06.499 "params": { 00:20:06.499 "impl_name": "ssl", 00:20:06.499 "recv_buf_size": 4096, 00:20:06.499 "send_buf_size": 4096, 00:20:06.499 "enable_recv_pipe": true, 00:20:06.499 "enable_quickack": false, 00:20:06.499 "enable_placement_id": 0, 00:20:06.499 "enable_zerocopy_send_server": true, 00:20:06.499 "enable_zerocopy_send_client": false, 00:20:06.499 "zerocopy_threshold": 0, 00:20:06.499 "tls_version": 0, 00:20:06.499 "enable_ktls": false 00:20:06.499 } 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "method": "sock_impl_set_options", 00:20:06.499 "params": { 00:20:06.499 "impl_name": "posix", 00:20:06.499 "recv_buf_size": 2097152, 00:20:06.499 "send_buf_size": 2097152, 00:20:06.499 "enable_recv_pipe": true, 00:20:06.499 "enable_quickack": false, 00:20:06.499 "enable_placement_id": 0, 00:20:06.499 "enable_zerocopy_send_server": true, 00:20:06.499 "enable_zerocopy_send_client": false, 00:20:06.499 "zerocopy_threshold": 0, 00:20:06.499 "tls_version": 0, 00:20:06.499 "enable_ktls": false 00:20:06.499 } 00:20:06.499 } 00:20:06.499 ] 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "subsystem": "vmd", 00:20:06.499 "config": [] 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "subsystem": "accel", 00:20:06.499 "config": [ 00:20:06.499 { 00:20:06.499 "method": "accel_set_options", 00:20:06.499 "params": { 00:20:06.499 "small_cache_size": 128, 00:20:06.499 "large_cache_size": 16, 00:20:06.499 "task_count": 2048, 00:20:06.499 "sequence_count": 2048, 00:20:06.499 "buf_count": 2048 00:20:06.499 } 00:20:06.499 } 00:20:06.499 ] 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "subsystem": "bdev", 00:20:06.499 "config": [ 00:20:06.499 { 00:20:06.499 "method": "bdev_set_options", 00:20:06.499 "params": { 00:20:06.499 "bdev_io_pool_size": 65535, 00:20:06.499 "bdev_io_cache_size": 256, 00:20:06.499 "bdev_auto_examine": true, 00:20:06.499 "iobuf_small_cache_size": 128, 00:20:06.499 "iobuf_large_cache_size": 16 00:20:06.499 } 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "method": "bdev_raid_set_options", 00:20:06.499 "params": { 00:20:06.499 "process_window_size_kb": 1024, 00:20:06.499 "process_max_bandwidth_mb_sec": 0 00:20:06.499 } 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "method": "bdev_iscsi_set_options", 00:20:06.499 "params": { 00:20:06.499 "timeout_sec": 30 00:20:06.499 } 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "method": "bdev_nvme_set_options", 00:20:06.499 "params": { 00:20:06.499 "action_on_timeout": "none", 
00:20:06.499 "timeout_us": 0, 00:20:06.499 "timeout_admin_us": 0, 00:20:06.499 "keep_alive_timeout_ms": 10000, 00:20:06.499 "arbitration_burst": 0, 00:20:06.499 "low_priority_weight": 0, 00:20:06.499 "medium_priority_weight": 0, 00:20:06.499 "high_priority_weight": 0, 00:20:06.499 "nvme_adminq_poll_period_us": 10000, 00:20:06.499 "nvme_ioq_poll_period_us": 0, 00:20:06.499 "io_queue_requests": 0, 00:20:06.499 "delay_cmd_submit": true, 00:20:06.499 "transport_retry_count": 4, 00:20:06.499 "bdev_retry_count": 3, 00:20:06.499 "transport_ack_timeout": 0, 00:20:06.499 "ctrlr_loss_timeout_sec": 0, 00:20:06.499 "reconnect_delay_sec": 0, 00:20:06.499 "fast_io_fail_timeout_sec": 0, 00:20:06.499 "disable_auto_failback": false, 00:20:06.499 "generate_uuids": false, 00:20:06.499 "transport_tos": 0, 00:20:06.499 "nvme_error_stat": false, 00:20:06.499 "rdma_srq_size": 0, 00:20:06.499 "io_path_stat": false, 00:20:06.499 "allow_accel_sequence": false, 00:20:06.499 "rdma_max_cq_size": 0, 00:20:06.499 "rdma_cm_event_timeout_ms": 0, 00:20:06.499 "dhchap_digests": [ 00:20:06.499 "sha256", 00:20:06.499 "sha384", 00:20:06.499 "sha512" 00:20:06.499 ], 00:20:06.499 "dhchap_dhgroups": [ 00:20:06.499 "null", 00:20:06.499 "ffdhe2048", 00:20:06.499 "ffdhe3072", 00:20:06.499 "ffdhe4096", 00:20:06.499 "ffdhe6144", 00:20:06.499 "ffdhe8192" 00:20:06.499 ] 00:20:06.499 } 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "method": "bdev_nvme_set_hotplug", 00:20:06.499 "params": { 00:20:06.499 "period_us": 100000, 00:20:06.499 "enable": false 00:20:06.499 } 00:20:06.499 }, 00:20:06.499 { 00:20:06.499 "method": "bdev_malloc_create", 00:20:06.499 "params": { 00:20:06.499 "name": "malloc0", 00:20:06.499 "num_blocks": 8192, 00:20:06.499 "block_size": 4096, 00:20:06.499 "physical_block_size": 4096, 00:20:06.500 "uuid": "85c5e259-6bb8-489b-88a0-f6ef44314669", 00:20:06.500 "optimal_io_boundary": 0, 00:20:06.500 "md_size": 0, 00:20:06.500 "dif_type": 0, 00:20:06.500 "dif_is_head_of_md": false, 00:20:06.500 "dif_pi_format": 0 00:20:06.500 } 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "method": "bdev_wait_for_examine" 00:20:06.500 } 00:20:06.500 ] 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "subsystem": "scsi", 00:20:06.500 "config": null 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "subsystem": "scheduler", 00:20:06.500 "config": [ 00:20:06.500 { 00:20:06.500 "method": "framework_set_scheduler", 00:20:06.500 "params": { 00:20:06.500 "name": "static" 00:20:06.500 } 00:20:06.500 } 00:20:06.500 ] 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "subsystem": "vhost_scsi", 00:20:06.500 "config": [] 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "subsystem": "vhost_blk", 00:20:06.500 "config": [] 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "subsystem": "ublk", 00:20:06.500 "config": [ 00:20:06.500 { 00:20:06.500 "method": "ublk_create_target", 00:20:06.500 "params": { 00:20:06.500 "cpumask": "1" 00:20:06.500 } 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "method": "ublk_start_disk", 00:20:06.500 "params": { 00:20:06.500 "bdev_name": "malloc0", 00:20:06.500 "ublk_id": 0, 00:20:06.500 "num_queues": 1, 00:20:06.500 "queue_depth": 128 00:20:06.500 } 00:20:06.500 } 00:20:06.500 ] 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "subsystem": "nbd", 00:20:06.500 "config": [] 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "subsystem": "nvmf", 00:20:06.500 "config": [ 00:20:06.500 { 00:20:06.500 "method": "nvmf_set_config", 00:20:06.500 "params": { 00:20:06.500 "discovery_filter": "match_any", 00:20:06.500 "admin_cmd_passthru": { 00:20:06.500 "identify_ctrlr": false 
00:20:06.500 }, 00:20:06.500 "dhchap_digests": [ 00:20:06.500 "sha256", 00:20:06.500 "sha384", 00:20:06.500 "sha512" 00:20:06.500 ], 00:20:06.500 "dhchap_dhgroups": [ 00:20:06.500 "null", 00:20:06.500 "ffdhe2048", 00:20:06.500 "ffdhe3072", 00:20:06.500 "ffdhe4096", 00:20:06.500 "ffdhe6144", 00:20:06.500 "ffdhe8192" 00:20:06.500 ] 00:20:06.500 } 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "method": "nvmf_set_max_subsystems", 00:20:06.500 "params": { 00:20:06.500 "max_subsystems": 1024 00:20:06.500 } 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "method": "nvmf_set_crdt", 00:20:06.500 "params": { 00:20:06.500 "crdt1": 0, 00:20:06.500 "crdt2": 0, 00:20:06.500 "crdt3": 0 00:20:06.500 } 00:20:06.500 } 00:20:06.500 ] 00:20:06.500 }, 00:20:06.500 { 00:20:06.500 "subsystem": "iscsi", 00:20:06.500 "config": [ 00:20:06.500 { 00:20:06.500 "method": "iscsi_set_options", 00:20:06.500 "params": { 00:20:06.500 "node_base": "iqn.2016-06.io.spdk", 00:20:06.500 "max_sessions": 128, 00:20:06.500 "max_connections_per_session": 2, 00:20:06.500 "max_queue_depth": 64, 00:20:06.500 "default_time2wait": 2, 00:20:06.500 "default_time2retain": 20, 00:20:06.500 "first_burst_length": 8192, 00:20:06.500 "immediate_data": true, 00:20:06.500 "allow_duplicated_isid": false, 00:20:06.500 "error_recovery_level": 0, 00:20:06.500 "nop_timeout": 60, 00:20:06.500 "nop_in_interval": 30, 00:20:06.500 "disable_chap": false, 00:20:06.500 "require_chap": false, 00:20:06.500 "mutual_chap": false, 00:20:06.500 "chap_group": 0, 00:20:06.500 "max_large_datain_per_connection": 64, 00:20:06.500 "max_r2t_per_connection": 4, 00:20:06.500 "pdu_pool_size": 36864, 00:20:06.500 "immediate_data_pool_size": 16384, 00:20:06.500 "data_out_pool_size": 2048 00:20:06.500 } 00:20:06.500 } 00:20:06.500 ] 00:20:06.500 } 00:20:06.500 ] 00:20:06.500 }' 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76457 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76457 ']' 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76457 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76457 00:20:06.500 killing process with pid 76457 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76457' 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76457 00:20:06.500 14:25:31 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76457 00:20:08.406 [2024-12-10 14:25:32.768702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:08.406 [2024-12-10 14:25:32.801726] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:08.406 [2024-12-10 14:25:32.801852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:08.406 [2024-12-10 14:25:32.809709] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:08.406 [2024-12-10 
14:25:32.809766] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:08.406 [2024-12-10 14:25:32.809783] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:08.406 [2024-12-10 14:25:32.809811] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:08.406 [2024-12-10 14:25:32.809967] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:10.943 14:25:35 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76534 00:20:10.943 14:25:35 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76534 00:20:10.943 14:25:35 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76534 ']' 00:20:10.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.943 14:25:35 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.943 14:25:35 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.943 14:25:35 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.943 14:25:35 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.943 14:25:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:20:10.943 14:25:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:20:10.943 "subsystems": [ 00:20:10.943 { 00:20:10.943 "subsystem": "fsdev", 00:20:10.943 "config": [ 00:20:10.943 { 00:20:10.943 "method": "fsdev_set_opts", 00:20:10.943 "params": { 00:20:10.943 "fsdev_io_pool_size": 65535, 00:20:10.943 "fsdev_io_cache_size": 256 00:20:10.943 } 00:20:10.943 } 00:20:10.943 ] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "keyring", 00:20:10.943 "config": [] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "iobuf", 00:20:10.943 "config": [ 00:20:10.943 { 00:20:10.943 "method": "iobuf_set_options", 00:20:10.943 "params": { 00:20:10.943 "small_pool_count": 8192, 00:20:10.943 "large_pool_count": 1024, 00:20:10.943 "small_bufsize": 8192, 00:20:10.943 "large_bufsize": 135168, 00:20:10.943 "enable_numa": false 00:20:10.943 } 00:20:10.943 } 00:20:10.943 ] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "sock", 00:20:10.943 "config": [ 00:20:10.943 { 00:20:10.943 "method": "sock_set_default_impl", 00:20:10.943 "params": { 00:20:10.943 "impl_name": "posix" 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "sock_impl_set_options", 00:20:10.943 "params": { 00:20:10.943 "impl_name": "ssl", 00:20:10.943 "recv_buf_size": 4096, 00:20:10.943 "send_buf_size": 4096, 00:20:10.943 "enable_recv_pipe": true, 00:20:10.943 "enable_quickack": false, 00:20:10.943 "enable_placement_id": 0, 00:20:10.943 "enable_zerocopy_send_server": true, 00:20:10.943 "enable_zerocopy_send_client": false, 00:20:10.943 "zerocopy_threshold": 0, 00:20:10.943 "tls_version": 0, 00:20:10.943 "enable_ktls": false 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "sock_impl_set_options", 00:20:10.943 "params": { 00:20:10.943 "impl_name": "posix", 00:20:10.943 "recv_buf_size": 2097152, 00:20:10.943 "send_buf_size": 2097152, 00:20:10.943 "enable_recv_pipe": true, 00:20:10.943 "enable_quickack": false, 00:20:10.943 "enable_placement_id": 0, 00:20:10.943 "enable_zerocopy_send_server": true, 00:20:10.943 "enable_zerocopy_send_client": false, 00:20:10.943 "zerocopy_threshold": 0, 00:20:10.943 
"tls_version": 0, 00:20:10.943 "enable_ktls": false 00:20:10.943 } 00:20:10.943 } 00:20:10.943 ] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "vmd", 00:20:10.943 "config": [] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "accel", 00:20:10.943 "config": [ 00:20:10.943 { 00:20:10.943 "method": "accel_set_options", 00:20:10.943 "params": { 00:20:10.943 "small_cache_size": 128, 00:20:10.943 "large_cache_size": 16, 00:20:10.943 "task_count": 2048, 00:20:10.943 "sequence_count": 2048, 00:20:10.943 "buf_count": 2048 00:20:10.943 } 00:20:10.943 } 00:20:10.943 ] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "bdev", 00:20:10.943 "config": [ 00:20:10.943 { 00:20:10.943 "method": "bdev_set_options", 00:20:10.943 "params": { 00:20:10.943 "bdev_io_pool_size": 65535, 00:20:10.943 "bdev_io_cache_size": 256, 00:20:10.943 "bdev_auto_examine": true, 00:20:10.943 "iobuf_small_cache_size": 128, 00:20:10.943 "iobuf_large_cache_size": 16 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "bdev_raid_set_options", 00:20:10.943 "params": { 00:20:10.943 "process_window_size_kb": 1024, 00:20:10.943 "process_max_bandwidth_mb_sec": 0 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "bdev_iscsi_set_options", 00:20:10.943 "params": { 00:20:10.943 "timeout_sec": 30 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "bdev_nvme_set_options", 00:20:10.943 "params": { 00:20:10.943 "action_on_timeout": "none", 00:20:10.943 "timeout_us": 0, 00:20:10.943 "timeout_admin_us": 0, 00:20:10.943 "keep_alive_timeout_ms": 10000, 00:20:10.943 "arbitration_burst": 0, 00:20:10.943 "low_priority_weight": 0, 00:20:10.943 "medium_priority_weight": 0, 00:20:10.943 "high_priority_weight": 0, 00:20:10.943 "nvme_adminq_poll_period_us": 10000, 00:20:10.943 "nvme_ioq_poll_period_us": 0, 00:20:10.943 "io_queue_requests": 0, 00:20:10.943 "delay_cmd_submit": true, 00:20:10.943 "transport_retry_count": 4, 00:20:10.943 "bdev_retry_count": 3, 00:20:10.943 "transport_ack_timeout": 0, 00:20:10.943 "ctrlr_loss_timeout_sec": 0, 00:20:10.943 "reconnect_delay_sec": 0, 00:20:10.943 "fast_io_fail_timeout_sec": 0, 00:20:10.943 "disable_auto_failback": false, 00:20:10.943 "generate_uuids": false, 00:20:10.943 "transport_tos": 0, 00:20:10.943 "nvme_error_stat": false, 00:20:10.943 "rdma_srq_size": 0, 00:20:10.943 "io_path_stat": false, 00:20:10.943 "allow_accel_sequence": false, 00:20:10.943 "rdma_max_cq_size": 0, 00:20:10.943 "rdma_cm_event_timeout_ms": 0, 00:20:10.943 "dhchap_digests": [ 00:20:10.943 "sha256", 00:20:10.943 "sha384", 00:20:10.943 "sha512" 00:20:10.943 ], 00:20:10.943 "dhchap_dhgroups": [ 00:20:10.943 "null", 00:20:10.943 "ffdhe2048", 00:20:10.943 "ffdhe3072", 00:20:10.943 "ffdhe4096", 00:20:10.943 "ffdhe6144", 00:20:10.943 "ffdhe8192" 00:20:10.943 ] 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "bdev_nvme_set_hotplug", 00:20:10.943 "params": { 00:20:10.943 "period_us": 100000, 00:20:10.943 "enable": false 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "bdev_malloc_create", 00:20:10.943 "params": { 00:20:10.943 "name": "malloc0", 00:20:10.943 "num_blocks": 8192, 00:20:10.943 "block_size": 4096, 00:20:10.943 "physical_block_size": 4096, 00:20:10.943 "uuid": "85c5e259-6bb8-489b-88a0-f6ef44314669", 00:20:10.943 "optimal_io_boundary": 0, 00:20:10.943 "md_size": 0, 00:20:10.943 "dif_type": 0, 00:20:10.943 "dif_is_head_of_md": false, 00:20:10.943 "dif_pi_format": 0 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 
00:20:10.943 "method": "bdev_wait_for_examine" 00:20:10.943 } 00:20:10.943 ] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "scsi", 00:20:10.943 "config": null 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "scheduler", 00:20:10.943 "config": [ 00:20:10.943 { 00:20:10.943 "method": "framework_set_scheduler", 00:20:10.943 "params": { 00:20:10.943 "name": "static" 00:20:10.943 } 00:20:10.943 } 00:20:10.943 ] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "vhost_scsi", 00:20:10.943 "config": [] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "vhost_blk", 00:20:10.943 "config": [] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "ublk", 00:20:10.943 "config": [ 00:20:10.943 { 00:20:10.943 "method": "ublk_create_target", 00:20:10.943 "params": { 00:20:10.943 "cpumask": "1" 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "ublk_start_disk", 00:20:10.943 "params": { 00:20:10.943 "bdev_name": "malloc0", 00:20:10.943 "ublk_id": 0, 00:20:10.943 "num_queues": 1, 00:20:10.943 "queue_depth": 128 00:20:10.943 } 00:20:10.943 } 00:20:10.943 ] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "nbd", 00:20:10.943 "config": [] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "nvmf", 00:20:10.943 "config": [ 00:20:10.943 { 00:20:10.943 "method": "nvmf_set_config", 00:20:10.943 "params": { 00:20:10.943 "discovery_filter": "match_any", 00:20:10.943 "admin_cmd_passthru": { 00:20:10.943 "identify_ctrlr": false 00:20:10.943 }, 00:20:10.943 "dhchap_digests": [ 00:20:10.943 "sha256", 00:20:10.943 "sha384", 00:20:10.943 "sha512" 00:20:10.943 ], 00:20:10.943 "dhchap_dhgroups": [ 00:20:10.943 "null", 00:20:10.943 "ffdhe2048", 00:20:10.943 "ffdhe3072", 00:20:10.943 "ffdhe4096", 00:20:10.943 "ffdhe6144", 00:20:10.943 "ffdhe8192" 00:20:10.943 ] 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "nvmf_set_max_subsystems", 00:20:10.943 "params": { 00:20:10.943 "max_subsystems": 1024 00:20:10.943 } 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "method": "nvmf_set_crdt", 00:20:10.943 "params": { 00:20:10.943 "crdt1": 0, 00:20:10.943 "crdt2": 0, 00:20:10.943 "crdt3": 0 00:20:10.943 } 00:20:10.943 } 00:20:10.943 ] 00:20:10.943 }, 00:20:10.943 { 00:20:10.943 "subsystem": "iscsi", 00:20:10.943 "config": [ 00:20:10.944 { 00:20:10.944 "method": "iscsi_set_options", 00:20:10.944 "params": { 00:20:10.944 "node_base": "iqn.2016-06.io.spdk", 00:20:10.944 "max_sessions": 128, 00:20:10.944 "max_connections_per_session": 2, 00:20:10.944 "max_queue_depth": 64, 00:20:10.944 "default_time2wait": 2, 00:20:10.944 "default_time2retain": 20, 00:20:10.944 "first_burst_length": 8192, 00:20:10.944 "immediate_data": true, 00:20:10.944 "allow_duplicated_isid": false, 00:20:10.944 "error_recovery_level": 0, 00:20:10.944 "nop_timeout": 60, 00:20:10.944 "nop_in_interval": 30, 00:20:10.944 "disable_chap": false, 00:20:10.944 "require_chap": false, 00:20:10.944 "mutual_chap": false, 00:20:10.944 "chap_group": 0, 00:20:10.944 "max_large_datain_per_connection": 64, 00:20:10.944 "max_r2t_per_connection": 4, 00:20:10.944 "pdu_pool_size": 36864, 00:20:10.944 "immediate_data_pool_size": 16384, 00:20:10.944 "data_out_pool_size": 2048 00:20:10.944 } 00:20:10.944 } 00:20:10.944 ] 00:20:10.944 } 00:20:10.944 ] 00:20:10.944 }' 00:20:10.944 14:25:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 [2024-12-10 14:25:35.381361] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:20:10.944 [2024-12-10 14:25:35.381979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76534 ] 00:20:10.944 [2024-12-10 14:25:35.563265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.944 [2024-12-10 14:25:35.693524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.354 [2024-12-10 14:25:36.847690] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:12.354 [2024-12-10 14:25:36.848904] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:12.354 [2024-12-10 14:25:36.855825] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:12.354 [2024-12-10 14:25:36.855937] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:12.354 [2024-12-10 14:25:36.855952] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:12.354 [2024-12-10 14:25:36.855961] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:12.354 [2024-12-10 14:25:36.864798] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:12.354 [2024-12-10 14:25:36.864823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:12.354 [2024-12-10 14:25:36.871704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:12.354 [2024-12-10 14:25:36.871808] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:12.354 [2024-12-10 14:25:36.888707] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76534 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76534 ']' 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76534 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.354 14:25:36 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76534 00:20:12.354 14:25:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.354 killing process with pid 76534 00:20:12.354 
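[editor's note] The pass criterion of test_save_ublk_config is a configuration round trip: the first target's live state is captured as the big JSON dump above via save_config, that target is killed, and a second spdk_tgt is started with the same JSON piped back in through -c /dev/fd/63; the test then checks that /dev/ublkb0 reappears without any ublk RPC being reissued. Reduced to a sketch (commands and the jq filter as they appear in the trace; waitforlisten/killprocess plumbing omitted):

  config=$(./scripts/rpc.py save_config)                 # first target's state, as JSON
  kill "$tgtpid"; wait "$tgtpid"                         # stop the first target
  ./build/bin/spdk_tgt -L ublk -c <(echo "$config") &    # <(...) is the /dev/fd/63 seen above
  tgtpid=$!
  # after waiting for the RPC socket, verify the disk was recreated from config alone:
  ./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
  [[ -b /dev/ublkb0 ]]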
14:25:37 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.354 14:25:37 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76534' 00:20:12.354 14:25:37 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76534 00:20:12.354 14:25:37 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76534 00:20:14.261 [2024-12-10 14:25:38.632080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:14.261 [2024-12-10 14:25:38.665713] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:14.261 [2024-12-10 14:25:38.665855] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:14.261 [2024-12-10 14:25:38.673712] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:14.261 [2024-12-10 14:25:38.673785] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:14.261 [2024-12-10 14:25:38.673795] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:14.261 [2024-12-10 14:25:38.673822] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:14.261 [2024-12-10 14:25:38.673983] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:16.166 14:25:40 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:16.166 ************************************ 00:20:16.166 END TEST test_save_ublk_config 00:20:16.166 ************************************ 00:20:16.166 00:20:16.166 real 0m11.318s 00:20:16.166 user 0m8.013s 00:20:16.166 sys 0m4.033s 00:20:16.166 14:25:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.166 14:25:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:16.166 14:25:40 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76625 00:20:16.166 14:25:40 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:16.166 14:25:40 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:16.166 14:25:40 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76625 00:20:16.166 14:25:40 ublk -- common/autotest_common.sh@835 -- # '[' -z 76625 ']' 00:20:16.166 14:25:40 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.166 14:25:40 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.166 14:25:40 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.166 14:25:40 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.166 14:25:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:16.166 [2024-12-10 14:25:40.791664] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
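[editor's note] The -m 0x3 mask on the spdk_tgt command line above is a CPU bitmap: bit i selects core i, so 0x3 (binary 11) pins the target to cores 0 and 1. That is why the EAL banner that follows reports two available cores and two reactors start, one per set bit:

  count=0
  for (( mask = 0x3; mask > 0; mask >>= 1 )); do
      (( count += mask & 1 )) || true   # popcount: one reactor per set bit
  done
  echo "$count reactors expected"       # -> 2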
00:20:16.167 [2024-12-10 14:25:40.792009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76625 ] 00:20:16.167 [2024-12-10 14:25:40.972161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:16.426 [2024-12-10 14:25:41.099818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.426 [2024-12-10 14:25:41.099851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.364 14:25:42 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.364 14:25:42 ublk -- common/autotest_common.sh@868 -- # return 0 00:20:17.364 14:25:42 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:17.364 14:25:42 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:17.364 14:25:42 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.364 14:25:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:17.364 ************************************ 00:20:17.364 START TEST test_create_ublk 00:20:17.364 ************************************ 00:20:17.364 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:20:17.364 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:17.364 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.364 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:17.364 [2024-12-10 14:25:42.090706] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:17.364 [2024-12-10 14:25:42.093864] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:17.364 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.364 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:17.364 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:17.364 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.364 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:17.623 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.623 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:17.623 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:17.623 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.623 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:17.623 [2024-12-10 14:25:42.429872] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:17.623 [2024-12-10 14:25:42.430408] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:17.623 [2024-12-10 14:25:42.430431] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:17.623 [2024-12-10 14:25:42.430440] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:17.623 [2024-12-10 14:25:42.439133] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:17.623 [2024-12-10 14:25:42.439160] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:17.623 
[2024-12-10 14:25:42.445736] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:17.623 [2024-12-10 14:25:42.446373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:17.882 [2024-12-10 14:25:42.460733] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:17.882 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:17.882 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.882 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:17.882 14:25:42 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:17.882 { 00:20:17.882 "ublk_device": "/dev/ublkb0", 00:20:17.882 "id": 0, 00:20:17.882 "queue_depth": 512, 00:20:17.882 "num_queues": 4, 00:20:17.882 "bdev_name": "Malloc0" 00:20:17.882 } 00:20:17.882 ]' 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:17.882 14:25:42 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:20:17.882 14:25:42 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:17.882 14:25:42 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:17.882 14:25:42 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:17.882 14:25:42 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:17.882 14:25:42 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:17.882 14:25:42 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:17.883 14:25:42 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:17.883 14:25:42 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:17.883 14:25:42 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:17.883 14:25:42 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:20:17.883 14:25:42 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:18.142 fio: verification read phase will never start because write phase uses all of runtime 00:20:18.142 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:18.142 fio-3.35 00:20:18.142 Starting 1 process 00:20:28.122 00:20:28.122 fio_test: (groupid=0, jobs=1): err= 0: pid=76677: Tue Dec 10 14:25:52 2024 00:20:28.122 write: IOPS=6807, BW=26.6MiB/s (27.9MB/s)(266MiB/10001msec); 0 zone resets 00:20:28.123 clat (usec): min=41, max=4078, avg=146.09, stdev=113.76 00:20:28.123 lat (usec): min=42, max=4126, avg=146.55, stdev=113.78 00:20:28.123 clat percentiles (usec): 00:20:28.123 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 128], 20.00th=[ 137], 00:20:28.123 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:20:28.123 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 167], 00:20:28.123 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 2507], 99.95th=[ 2966], 00:20:28.123 | 99.99th=[ 3720] 00:20:28.123 bw ( KiB/s): min=25744, max=49384, per=100.00%, avg=27333.05, stdev=5355.53, samples=19 00:20:28.123 iops : min= 6436, max=12346, avg=6833.26, stdev=1338.88, samples=19 00:20:28.123 lat (usec) : 50=5.94%, 100=1.27%, 250=92.54%, 500=0.01%, 750=0.02% 00:20:28.123 lat (usec) : 1000=0.02% 00:20:28.123 lat (msec) : 2=0.07%, 4=0.14%, 10=0.01% 00:20:28.123 cpu : usr=1.35%, sys=5.10%, ctx=68082, majf=0, minf=797 00:20:28.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:28.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.123 issued rwts: total=0,68082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.123 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:28.123 00:20:28.123 Run status group 0 (all jobs): 00:20:28.123 WRITE: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=266MiB (279MB), run=10001-10001msec 00:20:28.123 00:20:28.123 Disk stats (read/write): 00:20:28.123 ublkb0: ios=0/67420, merge=0/0, ticks=0/9216, in_queue=9217, util=99.12% 00:20:28.382 14:25:52 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:28.382 14:25:52 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.382 14:25:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:28.382 [2024-12-10 14:25:52.967945] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:28.382 [2024-12-10 14:25:53.016281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:28.382 [2024-12-10 14:25:53.017425] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:28.382 [2024-12-10 14:25:53.021907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:28.382 [2024-12-10 14:25:53.022614] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:28.382 [2024-12-10 14:25:53.022632] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.382 14:25:53 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:28.382 [2024-12-10 14:25:53.045805] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:28.382 request: 00:20:28.382 { 00:20:28.382 "ublk_id": 0, 00:20:28.382 "method": "ublk_stop_disk", 00:20:28.382 "req_id": 1 00:20:28.382 } 00:20:28.382 Got JSON-RPC error response 00:20:28.382 response: 00:20:28.382 { 00:20:28.382 "code": -19, 00:20:28.382 "message": "No such device" 00:20:28.382 } 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.382 14:25:53 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:28.382 [2024-12-10 14:25:53.069791] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:28.382 [2024-12-10 14:25:53.077689] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:28.382 [2024-12-10 14:25:53.077732] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.382 14:25:53 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.382 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:29.321 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.321 14:25:53 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:29.321 14:25:53 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:29.321 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.321 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:29.321 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.321 14:25:53 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:29.321 14:25:53 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:29.321 14:25:53 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:29.321 14:25:53 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:29.321 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.321 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:29.321 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.321 14:25:53 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:29.321 14:25:53 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:29.321 ************************************ 00:20:29.321 END TEST test_create_ublk 00:20:29.321 ************************************ 00:20:29.321 14:25:53 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:29.321 00:20:29.321 real 0m11.903s 00:20:29.321 user 0m0.534s 00:20:29.321 sys 0m0.648s 00:20:29.321 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.321 14:25:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:29.321 14:25:54 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:29.321 14:25:54 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:29.321 14:25:54 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.321 14:25:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:29.321 ************************************ 00:20:29.321 START TEST test_create_multi_ublk 00:20:29.321 ************************************ 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:29.321 [2024-12-10 14:25:54.067708] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:29.321 [2024-12-10 14:25:54.070578] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.321 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:29.580 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.581 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:29.581 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:29.581 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.581 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:29.581 [2024-12-10 14:25:54.392891] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:20:29.581 [2024-12-10 14:25:54.393387] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:29.581 [2024-12-10 14:25:54.393399] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:29.581 [2024-12-10 14:25:54.393414] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:29.581 [2024-12-10 14:25:54.402077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:29.581 [2024-12-10 14:25:54.402107] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:29.581 [2024-12-10 14:25:54.408743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:29.581 [2024-12-10 14:25:54.409389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:29.840 [2024-12-10 14:25:54.418186] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:29.840 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.840 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:29.840 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:29.840 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:29.840 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.840 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:30.100 [2024-12-10 14:25:54.759869] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:30.100 [2024-12-10 14:25:54.760394] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:30.100 [2024-12-10 14:25:54.760416] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:30.100 [2024-12-10 14:25:54.760425] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:30.100 [2024-12-10 14:25:54.767740] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:30.100 [2024-12-10 14:25:54.767768] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:30.100 [2024-12-10 14:25:54.775739] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:30.100 [2024-12-10 14:25:54.776415] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:30.100 [2024-12-10 14:25:54.784768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:30.100 
14:25:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.100 14:25:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:30.359 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.359 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:30.359 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:30.359 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.359 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:30.359 [2024-12-10 14:25:55.121832] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:30.359 [2024-12-10 14:25:55.122332] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:30.359 [2024-12-10 14:25:55.122349] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:30.359 [2024-12-10 14:25:55.122361] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:30.360 [2024-12-10 14:25:55.129743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:30.360 [2024-12-10 14:25:55.129775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:30.360 [2024-12-10 14:25:55.137739] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:30.360 [2024-12-10 14:25:55.138413] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:30.360 [2024-12-10 14:25:55.146761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:30.360 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.360 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:30.360 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:30.360 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:20:30.360 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.360 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:30.927 [2024-12-10 14:25:55.480893] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:30.927 [2024-12-10 14:25:55.481400] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:30.927 [2024-12-10 14:25:55.481422] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:30.927 [2024-12-10 14:25:55.481431] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:30.927 
[2024-12-10 14:25:55.488748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:20:30.927 [2024-12-10 14:25:55.488771] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:20:30.927 [2024-12-10 14:25:55.496718] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:20:30.927 [2024-12-10 14:25:55.497298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:20:30.927 [2024-12-10 14:25:55.505737] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:20:30.927 {
00:20:30.927 "ublk_device": "/dev/ublkb0",
00:20:30.927 "id": 0,
00:20:30.927 "queue_depth": 512,
00:20:30.927 "num_queues": 4,
00:20:30.927 "bdev_name": "Malloc0"
00:20:30.927 },
00:20:30.927 {
00:20:30.927 "ublk_device": "/dev/ublkb1",
00:20:30.927 "id": 1,
00:20:30.927 "queue_depth": 512,
00:20:30.927 "num_queues": 4,
00:20:30.927 "bdev_name": "Malloc1"
00:20:30.927 },
00:20:30.927 {
00:20:30.927 "ublk_device": "/dev/ublkb2",
00:20:30.927 "id": 2,
00:20:30.927 "queue_depth": 512,
00:20:30.927 "num_queues": 4,
00:20:30.927 "bdev_name": "Malloc2"
00:20:30.927 },
00:20:30.927 {
00:20:30.927 "ublk_device": "/dev/ublkb3",
00:20:30.927 "id": 3,
00:20:30.927 "queue_depth": 512,
00:20:30.927 "num_queues": 4,
00:20:30.927 "bdev_name": "Malloc3"
00:20:30.927 }
00:20:30.927 ]'
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:20:30.927 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:31.186 14:25:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:20:31.186 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:20:31.186 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:31.446 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:31.705 [2024-12-10 14:25:56.372804] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:31.705 [2024-12-10 14:25:56.419277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:31.705 [2024-12-10 14:25:56.420595] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:31.705 [2024-12-10 14:25:56.426818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:31.705 [2024-12-10 14:25:56.427138] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:31.705 [2024-12-10 14:25:56.427153] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:31.705 [2024-12-10 14:25:56.443807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:31.705 [2024-12-10 14:25:56.482308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:31.705 [2024-12-10 14:25:56.483598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:31.705 [2024-12-10 14:25:56.491715] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:31.705 [2024-12-10 14:25:56.492052] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:31.705 [2024-12-10 14:25:56.492067] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.705 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:31.705 [2024-12-10 14:25:56.499804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:20:31.705 [2024-12-10 14:25:56.531764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:31.705 [2024-12-10 14:25:56.532804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:20:31.964 [2024-12-10 14:25:56.543772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:31.964 [2024-12-10 14:25:56.544098] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:20:31.964 [2024-12-10 14:25:56.544117] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:20:31.964 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.964 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:31.964 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:20:31.964 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.964 14:25:56 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.965 [2024-12-10 14:25:56.561815] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:20:31.965 [2024-12-10 14:25:56.599250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:31.965 [2024-12-10 14:25:56.600378] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:20:31.965 [2024-12-10 14:25:56.609722] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:31.965 [2024-12-10 14:25:56.610001] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:20:31.965 [2024-12-10 14:25:56.610014] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:20:31.965 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.965 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:20:32.224 [2024-12-10 14:25:56.804764] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:32.224 [2024-12-10 14:25:56.812709] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:32.224 [2024-12-10 14:25:56.812743] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:32.224 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:20:32.224 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:32.224 14:25:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:32.224 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.224 14:25:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:32.792 14:25:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.792 14:25:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:32.792 14:25:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:32.792 14:25:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.792 14:25:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:33.361 14:25:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.361 14:25:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:33.361 14:25:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:33.361 14:25:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.361 14:25:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:33.620 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.620 14:25:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:33.620 14:25:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:33.620 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.620 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:20:34.189 14:25:58 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:20:34.189 ************************************ 00:20:34.189 END TEST test_create_multi_ublk 00:20:34.189 ************************************ 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:34.189 00:20:34.189 real 0m4.802s 00:20:34.189 user 0m0.983s 00:20:34.189 sys 0m0.233s 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.189 14:25:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:34.189 14:25:58 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:34.189 14:25:58 ublk -- ublk/ublk.sh@147 -- # cleanup 00:20:34.189 14:25:58 ublk -- ublk/ublk.sh@130 -- # killprocess 76625 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@954 -- # '[' -z 76625 ']' 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@958 -- # kill -0 76625 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@959 -- # uname 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76625 00:20:34.189 killing process with pid 76625 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76625' 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@973 -- # kill 76625 00:20:34.189 14:25:58 ublk -- common/autotest_common.sh@978 -- # wait 76625 00:20:35.569 [2024-12-10 14:26:00.156030] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:35.569 [2024-12-10 14:26:00.156092] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:36.948 00:20:36.948 real 0m32.426s 00:20:36.948 user 0m45.616s 00:20:36.948 sys 0m10.293s 00:20:36.948 14:26:01 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.948 ************************************ 00:20:36.948 END TEST ublk 00:20:36.948 ************************************ 00:20:36.948 14:26:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:36.948 14:26:01 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:36.948 
14:26:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.948 14:26:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.948 14:26:01 -- common/autotest_common.sh@10 -- # set +x 00:20:36.948 ************************************ 00:20:36.948 START TEST ublk_recovery 00:20:36.948 ************************************ 00:20:36.948 14:26:01 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:36.948 * Looking for test storage... 00:20:36.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:36.948 14:26:01 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:36.948 14:26:01 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:36.948 14:26:01 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:36.948 14:26:01 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.948 14:26:01 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:20:37.207 14:26:01 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.207 14:26:01 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.207 14:26:01 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.207 14:26:01 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:20:37.207 14:26:01 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.207 14:26:01 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:37.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.207 --rc genhtml_branch_coverage=1 00:20:37.207 --rc genhtml_function_coverage=1 00:20:37.207 --rc genhtml_legend=1 00:20:37.207 --rc geninfo_all_blocks=1 00:20:37.207 --rc geninfo_unexecuted_blocks=1 00:20:37.208 00:20:37.208 ' 00:20:37.208 14:26:01 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:37.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.208 --rc genhtml_branch_coverage=1 00:20:37.208 --rc genhtml_function_coverage=1 00:20:37.208 --rc genhtml_legend=1 00:20:37.208 --rc geninfo_all_blocks=1 00:20:37.208 --rc geninfo_unexecuted_blocks=1 00:20:37.208 00:20:37.208 ' 00:20:37.208 14:26:01 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:37.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.208 --rc genhtml_branch_coverage=1 00:20:37.208 --rc genhtml_function_coverage=1 00:20:37.208 --rc genhtml_legend=1 00:20:37.208 --rc geninfo_all_blocks=1 00:20:37.208 --rc geninfo_unexecuted_blocks=1 00:20:37.208 00:20:37.208 ' 00:20:37.208 14:26:01 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:37.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.208 --rc genhtml_branch_coverage=1 00:20:37.208 --rc genhtml_function_coverage=1 00:20:37.208 --rc genhtml_legend=1 00:20:37.208 --rc geninfo_all_blocks=1 00:20:37.208 --rc geninfo_unexecuted_blocks=1 00:20:37.208 00:20:37.208 ' 00:20:37.208 14:26:01 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:37.208 14:26:01 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:37.208 14:26:01 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:37.208 14:26:01 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:37.208 14:26:01 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:37.208 14:26:01 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:37.208 14:26:01 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:37.208 14:26:01 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:37.208 14:26:01 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:20:37.208 14:26:01 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:37.208 14:26:01 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77051 00:20:37.208 14:26:01 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:37.208 14:26:01 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.208 14:26:01 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77051 00:20:37.208 14:26:01 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 77051 ']' 00:20:37.208 14:26:01 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.208 14:26:01 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.208 14:26:01 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.208 14:26:01 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.208 14:26:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.208 [2024-12-10 14:26:01.907473] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:20:37.208 [2024-12-10 14:26:01.907763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77051 ] 00:20:37.467 [2024-12-10 14:26:02.094730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:37.467 [2024-12-10 14:26:02.240151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.467 [2024-12-10 14:26:02.240183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.409 14:26:03 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.409 14:26:03 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:38.409 14:26:03 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:20:38.409 14:26:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.409 14:26:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.409 [2024-12-10 14:26:03.211696] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:38.409 [2024-12-10 14:26:03.214793] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:38.409 14:26:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.409 14:26:03 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:38.409 14:26:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.409 14:26:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.668 malloc0 00:20:38.668 14:26:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.668 14:26:03 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:20:38.668 14:26:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.668 14:26:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.668 [2024-12-10 14:26:03.391858] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:20:38.668 [2024-12-10 14:26:03.391995] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:20:38.668 [2024-12-10 14:26:03.392011] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:38.668 [2024-12-10 14:26:03.392021] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:38.668 [2024-12-10 14:26:03.398735] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:38.668 [2024-12-10 14:26:03.398761] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:38.668 [2024-12-10 14:26:03.406734] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:38.668 [2024-12-10 14:26:03.406892] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:38.668 [2024-12-10 14:26:03.428711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:38.668 1 00:20:38.668 14:26:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.668 14:26:03 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:20:40.102 14:26:04 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77096 00:20:40.102 14:26:04 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:20:40.102 14:26:04 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:20:40.102 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:40.102 fio-3.35 00:20:40.102 Starting 1 process 00:20:45.399 14:26:09 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77051 00:20:45.399 14:26:09 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:50.676 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77051 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:50.676 14:26:14 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77204 00:20:50.676 14:26:14 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:50.676 14:26:14 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.676 14:26:14 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77204 00:20:50.676 14:26:14 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 77204 ']' 00:20:50.676 14:26:14 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.676 14:26:14 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.676 14:26:14 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.676 14:26:14 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.676 14:26:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.676 [2024-12-10 14:26:14.565234] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:20:50.676 [2024-12-10 14:26:14.565346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77204 ] 00:20:50.676 [2024-12-10 14:26:14.744816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:50.676 [2024-12-10 14:26:14.873780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.676 [2024-12-10 14:26:14.873804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.244 14:26:15 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.244 14:26:15 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:51.244 14:26:15 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:51.244 14:26:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.244 14:26:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.244 [2024-12-10 14:26:15.865694] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:51.244 [2024-12-10 14:26:15.868689] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:51.244 14:26:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.245 14:26:15 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:51.245 14:26:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.245 14:26:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.245 malloc0 00:20:51.245 14:26:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.245 14:26:16 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:51.245 14:26:16 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.245 14:26:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.245 [2024-12-10 14:26:16.037853] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:51.245 [2024-12-10 14:26:16.037900] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:51.245 [2024-12-10 14:26:16.037914] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:51.245 [2024-12-10 14:26:16.045760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:51.245 [2024-12-10 14:26:16.045789] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:20:51.245 [2024-12-10 14:26:16.045800] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:20:51.245 [2024-12-10 14:26:16.045909] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:51.245 1 00:20:51.245 14:26:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.245 14:26:16 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77096 00:20:51.245 [2024-12-10 14:26:16.053704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:51.245 [2024-12-10 14:26:16.061286] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:51.245 [2024-12-10 14:26:16.068883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:51.245 [2024-12-10 
14:26:16.068907] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:21:47.506
00:21:47.506 fio_test: (groupid=0, jobs=1): err= 0: pid=77100: Tue Dec 10 14:27:04 2024
00:21:47.506 read: IOPS=20.7k, BW=80.9MiB/s (84.8MB/s)(4853MiB/60003msec)
00:21:47.506 slat (nsec): min=1876, max=583549, avg=7784.99, stdev=2577.97
00:21:47.506 clat (usec): min=1214, max=6631.2k, avg=3016.94, stdev=46058.14
00:21:47.506 lat (usec): min=1221, max=6631.2k, avg=3024.73, stdev=46058.14
00:21:47.506 clat percentiles (usec):
00:21:47.506 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2474],
00:21:47.506 | 30.00th=[ 2540], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606],
00:21:47.506 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2835], 95.00th=[ 3752],
00:21:47.506 | 99.00th=[ 5080], 99.50th=[ 5735], 99.90th=[ 6718], 99.95th=[ 8356],
00:21:47.506 | 99.99th=[13173]
00:21:47.506 bw ( KiB/s): min= 9868, max=96616, per=100.00%, avg=92127.12, stdev=10567.12, samples=107
00:21:47.506 iops : min= 2467, max=24154, avg=23031.76, stdev=2641.78, samples=107
00:21:47.506 write: IOPS=20.7k, BW=80.8MiB/s (84.7MB/s)(4849MiB/60003msec); 0 zone resets
00:21:47.506 slat (nsec): min=1875, max=850099, avg=7776.91, stdev=2602.88
00:21:47.506 clat (usec): min=1145, max=6631.3k, avg=3150.91, stdev=49049.70
00:21:47.506 lat (usec): min=1149, max=6631.3k, avg=3158.69, stdev=49049.71
00:21:47.506 clat percentiles (usec):
00:21:47.506 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2606],
00:21:47.506 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737],
00:21:47.506 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2933], 95.00th=[ 3752],
00:21:47.506 | 99.00th=[ 5145], 99.50th=[ 5735], 99.90th=[ 6783], 99.95th=[ 8455],
00:21:47.506 | 99.99th=[13304]
00:21:47.506 bw ( KiB/s): min=10051, max=96496, per=100.00%, avg=92039.11, stdev=10492.12, samples=107
00:21:47.506 iops : min= 2512, max=24124, avg=23009.76, stdev=2623.08, samples=107
00:21:47.506 lat (msec) : 2=0.31%, 4=95.70%, 10=3.97%, 20=0.01%, >=2000=0.01%
00:21:47.506 cpu : usr=11.79%, sys=32.21%, ctx=108555, majf=0, minf=13
00:21:47.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:21:47.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:47.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:47.506 issued rwts: total=1242306,1241309,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:47.506 latency : target=0, window=0, percentile=100.00%, depth=128
00:21:47.506
00:21:47.506 Run status group 0 (all jobs):
00:21:47.506 READ: bw=80.9MiB/s (84.8MB/s), 80.9MiB/s-80.9MiB/s (84.8MB/s-84.8MB/s), io=4853MiB (5088MB), run=60003-60003msec
00:21:47.506 WRITE: bw=80.8MiB/s (84.7MB/s), 80.8MiB/s-80.8MiB/s (84.7MB/s-84.7MB/s), io=4849MiB (5084MB), run=60003-60003msec
00:21:47.506
00:21:47.506 Disk stats (read/write):
00:21:47.506 ublkb1: ios=1239795/1238762, merge=0/0, ticks=3635791/3664213, in_queue=7300005, util=99.94%
00:21:47.506 14:27:04 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:21:47.506 14:27:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:47.506 14:27:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:21:47.506 [2024-12-10 14:27:04.731859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:21:47.506 [2024-12-10 14:27:04.770851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:21:47.506 [2024-12-10
14:27:04.771043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:47.506 [2024-12-10 14:27:04.776730] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:47.506 [2024-12-10 14:27:04.776861] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:47.506 [2024-12-10 14:27:04.776875] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:47.506 14:27:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.506 14:27:04 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:21:47.506 14:27:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.506 14:27:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.506 [2024-12-10 14:27:04.791830] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:47.506 [2024-12-10 14:27:04.800692] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:47.507 [2024-12-10 14:27:04.800729] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.507 14:27:04 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:21:47.507 14:27:04 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:21:47.507 14:27:04 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77204 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 77204 ']' 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 77204 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77204 00:21:47.507 killing process with pid 77204 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77204' 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@973 -- # kill 77204 00:21:47.507 14:27:04 ublk_recovery -- common/autotest_common.sh@978 -- # wait 77204 00:21:47.507 [2024-12-10 14:27:06.977911] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:47.507 [2024-12-10 14:27:06.978132] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:47.507 00:21:47.507 real 1m6.917s 00:21:47.507 user 1m50.524s 00:21:47.507 sys 0m38.670s 00:21:47.507 ************************************ 00:21:47.507 END TEST ublk_recovery 00:21:47.507 ************************************ 00:21:47.507 14:27:08 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.507 14:27:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.507 14:27:08 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:21:47.507 14:27:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:47.507 14:27:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.507 14:27:08 -- common/autotest_common.sh@10 -- # set +x 00:21:47.507 14:27:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:21:47.507 14:27:08 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:47.507 14:27:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:47.507 14:27:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.507 14:27:08 -- common/autotest_common.sh@10 -- # set +x 00:21:47.507 ************************************ 00:21:47.507 START TEST ftl 00:21:47.507 ************************************ 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:47.507 * Looking for test storage... 00:21:47.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:47.507 14:27:08 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.507 14:27:08 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.507 14:27:08 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.507 14:27:08 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.507 14:27:08 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.507 14:27:08 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.507 14:27:08 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.507 14:27:08 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.507 14:27:08 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.507 14:27:08 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.507 14:27:08 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.507 14:27:08 ftl -- scripts/common.sh@344 -- # case "$op" in 00:21:47.507 14:27:08 ftl -- scripts/common.sh@345 -- # : 1 00:21:47.507 14:27:08 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.507 14:27:08 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.507 14:27:08 ftl -- scripts/common.sh@365 -- # decimal 1 00:21:47.507 14:27:08 ftl -- scripts/common.sh@353 -- # local d=1 00:21:47.507 14:27:08 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.507 14:27:08 ftl -- scripts/common.sh@355 -- # echo 1 00:21:47.507 14:27:08 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.507 14:27:08 ftl -- scripts/common.sh@366 -- # decimal 2 00:21:47.507 14:27:08 ftl -- scripts/common.sh@353 -- # local d=2 00:21:47.507 14:27:08 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.507 14:27:08 ftl -- scripts/common.sh@355 -- # echo 2 00:21:47.507 14:27:08 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.507 14:27:08 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.507 14:27:08 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.507 14:27:08 ftl -- scripts/common.sh@368 -- # return 0 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.507 --rc genhtml_branch_coverage=1 00:21:47.507 --rc genhtml_function_coverage=1 00:21:47.507 --rc genhtml_legend=1 00:21:47.507 --rc geninfo_all_blocks=1 00:21:47.507 --rc geninfo_unexecuted_blocks=1 00:21:47.507 00:21:47.507 ' 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.507 --rc genhtml_branch_coverage=1 00:21:47.507 --rc genhtml_function_coverage=1 00:21:47.507 --rc genhtml_legend=1 00:21:47.507 --rc geninfo_all_blocks=1 00:21:47.507 --rc geninfo_unexecuted_blocks=1 00:21:47.507 00:21:47.507 ' 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.507 --rc genhtml_branch_coverage=1 00:21:47.507 --rc genhtml_function_coverage=1 00:21:47.507 --rc genhtml_legend=1 00:21:47.507 --rc geninfo_all_blocks=1 00:21:47.507 --rc geninfo_unexecuted_blocks=1 00:21:47.507 00:21:47.507 ' 00:21:47.507 14:27:08 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.507 --rc genhtml_branch_coverage=1 00:21:47.507 --rc genhtml_function_coverage=1 00:21:47.507 --rc genhtml_legend=1 00:21:47.507 --rc geninfo_all_blocks=1 00:21:47.507 --rc geninfo_unexecuted_blocks=1 00:21:47.507 00:21:47.507 ' 00:21:47.507 14:27:08 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:47.507 14:27:08 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:47.507 14:27:08 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:47.507 14:27:08 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:47.507 14:27:08 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:47.507 14:27:08 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:47.507 14:27:08 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:47.507 14:27:08 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:47.507 14:27:08 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:47.507 14:27:08 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:47.507 14:27:08 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:47.507 14:27:08 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:47.507 14:27:08 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:47.507 14:27:08 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:47.507 14:27:08 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:47.507 14:27:08 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:47.507 14:27:08 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:47.507 14:27:08 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:47.507 14:27:08 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:47.507 14:27:08 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:47.507 14:27:08 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:47.507 14:27:08 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:47.507 14:27:08 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:47.507 14:27:08 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:47.507 14:27:08 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:47.507 14:27:08 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:47.507 14:27:08 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:47.507 14:27:08 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:47.507 14:27:08 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:47.507 14:27:08 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:47.507 14:27:08 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:21:47.507 14:27:08 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:21:47.507 14:27:08 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:21:47.507 14:27:08 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:21:47.507 14:27:08 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:47.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:47.507 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:47.507 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:47.507 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:47.507 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:47.507 14:27:09 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=78010 00:21:47.507 14:27:09 ftl -- ftl/ftl.sh@38 -- # waitforlisten 78010 00:21:47.507 14:27:09 ftl -- common/autotest_common.sh@835 -- # '[' -z 78010 ']' 00:21:47.508 14:27:09 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.508 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.508 14:27:09 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.508 14:27:09 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:47.508 14:27:09 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.508 14:27:09 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.508 14:27:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:47.508 [2024-12-10 14:27:09.947768] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:21:47.508 [2024-12-10 14:27:09.947910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78010 ] 00:21:47.508 [2024-12-10 14:27:10.135381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.508 [2024-12-10 14:27:10.261796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.508 14:27:10 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.508 14:27:10 ftl -- common/autotest_common.sh@868 -- # return 0 00:21:47.508 14:27:10 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:21:47.508 14:27:10 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:47.508 14:27:12 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:47.508 14:27:12 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:47.767 14:27:12 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:47.767 14:27:12 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:47.767 14:27:12 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:48.026 14:27:12 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:48.026 14:27:12 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:48.026 14:27:12 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:48.026 14:27:12 ftl -- ftl/ftl.sh@50 -- # break 00:21:48.026 14:27:12 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:48.026 14:27:12 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:21:48.026 14:27:12 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:48.026 14:27:12 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:48.285 14:27:12 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:48.285 14:27:12 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:48.285 14:27:12 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:48.285 14:27:12 ftl -- ftl/ftl.sh@63 -- # break 00:21:48.285 14:27:12 ftl -- ftl/ftl.sh@66 -- # killprocess 78010 00:21:48.285 14:27:12 ftl -- common/autotest_common.sh@954 -- # '[' -z 78010 ']' 00:21:48.285 14:27:12 ftl -- common/autotest_common.sh@958 -- # kill -0 78010 00:21:48.285 14:27:12 ftl -- common/autotest_common.sh@959 -- # uname 00:21:48.285 14:27:12 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.285 14:27:12 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78010 00:21:48.285 killing process with pid 78010 00:21:48.285 14:27:12 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.285 14:27:12 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.285 14:27:12 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78010' 00:21:48.285 14:27:12 ftl -- common/autotest_common.sh@973 -- # kill 78010 00:21:48.285 14:27:12 ftl -- common/autotest_common.sh@978 -- # wait 78010 00:21:50.822 14:27:15 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:50.822 14:27:15 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:50.822 14:27:15 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:50.822 14:27:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:50.822 14:27:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:50.822 ************************************ 00:21:50.822 START TEST ftl_fio_basic 00:21:50.822 ************************************ 00:21:50.822 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:50.822 * Looking for test storage... 00:21:50.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:50.822 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:50.822 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:21:50.822 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:51.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.082 --rc genhtml_branch_coverage=1 00:21:51.082 --rc genhtml_function_coverage=1 00:21:51.082 --rc genhtml_legend=1 00:21:51.082 --rc geninfo_all_blocks=1 00:21:51.082 --rc geninfo_unexecuted_blocks=1 00:21:51.082 00:21:51.082 ' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:51.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.082 --rc genhtml_branch_coverage=1 00:21:51.082 --rc genhtml_function_coverage=1 00:21:51.082 --rc genhtml_legend=1 00:21:51.082 --rc geninfo_all_blocks=1 00:21:51.082 --rc geninfo_unexecuted_blocks=1 00:21:51.082 00:21:51.082 ' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:51.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.082 --rc genhtml_branch_coverage=1 00:21:51.082 --rc genhtml_function_coverage=1 00:21:51.082 --rc genhtml_legend=1 00:21:51.082 --rc geninfo_all_blocks=1 00:21:51.082 --rc geninfo_unexecuted_blocks=1 00:21:51.082 00:21:51.082 ' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:51.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.082 --rc genhtml_branch_coverage=1 00:21:51.082 --rc genhtml_function_coverage=1 00:21:51.082 --rc genhtml_legend=1 00:21:51.082 --rc geninfo_all_blocks=1 00:21:51.082 --rc geninfo_unexecuted_blocks=1 00:21:51.082 00:21:51.082 ' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
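The lcov gate traced above (lt 1.15 2 via cmp_versions) decides whether the legacy branch/function coverage flags are needed by splitting dotted versions on '.', '-' and ':' and comparing the fields numerically. A minimal standalone sketch of that comparison, simplified from the scripts/common.sh helpers exercised here (illustrative only, not the exact implementation):

version_lt() {
    # Split both versions on dots and compare field by field, padding the
    # shorter one with zeros; e.g. version_lt 1.15 2 succeeds because 1 < 2.
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && use_legacy_lcov_flags=1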
00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:21:51.082 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78165 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78165 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 78165 ']' 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.083 14:27:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:51.083 [2024-12-10 14:27:15.898649] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
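The startup handshake traced above follows the usual autotest pattern: fio.sh launches spdk_tgt with core mask 7, records svcpid, and waitforlisten polls until the RPC socket answers before any bdev calls are issued. A condensed sketch of that pattern under the paths shown in the log (the polling loop is simplified; the real waitforlisten in autotest_common.sh also verifies the pid is still alive between retries):

spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt_bin" -m 7 &
svcpid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the target listens on /var/tmp/spdk.sock
    if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done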
00:21:51.083 [2024-12-10 14:27:15.898805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78165 ] 00:21:51.342 [2024-12-10 14:27:16.085221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:51.601 [2024-12-10 14:27:16.228884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.601 [2024-12-10 14:27:16.228808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.601 [2024-12-10 14:27:16.228925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.539 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.539 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:21:52.539 14:27:17 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:52.539 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:52.539 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:52.539 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:52.539 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:52.539 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:52.798 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:52.798 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:52.798 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:52.798 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:52.798 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:52.798 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:52.798 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:52.798 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:53.057 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:53.057 { 00:21:53.057 "name": "nvme0n1", 00:21:53.057 "aliases": [ 00:21:53.057 "699e254f-0be1-4a3a-82f7-4ea8c83556e9" 00:21:53.057 ], 00:21:53.057 "product_name": "NVMe disk", 00:21:53.057 "block_size": 4096, 00:21:53.057 "num_blocks": 1310720, 00:21:53.057 "uuid": "699e254f-0be1-4a3a-82f7-4ea8c83556e9", 00:21:53.057 "numa_id": -1, 00:21:53.057 "assigned_rate_limits": { 00:21:53.057 "rw_ios_per_sec": 0, 00:21:53.057 "rw_mbytes_per_sec": 0, 00:21:53.057 "r_mbytes_per_sec": 0, 00:21:53.057 "w_mbytes_per_sec": 0 00:21:53.057 }, 00:21:53.057 "claimed": false, 00:21:53.057 "zoned": false, 00:21:53.057 "supported_io_types": { 00:21:53.057 "read": true, 00:21:53.057 "write": true, 00:21:53.057 "unmap": true, 00:21:53.057 "flush": true, 00:21:53.057 "reset": true, 00:21:53.057 "nvme_admin": true, 00:21:53.057 "nvme_io": true, 00:21:53.057 "nvme_io_md": false, 00:21:53.057 "write_zeroes": true, 00:21:53.057 "zcopy": false, 00:21:53.057 "get_zone_info": false, 00:21:53.057 "zone_management": false, 00:21:53.057 "zone_append": false, 00:21:53.057 "compare": true, 00:21:53.057 "compare_and_write": false, 00:21:53.057 "abort": true, 00:21:53.057 
"seek_hole": false, 00:21:53.057 "seek_data": false, 00:21:53.057 "copy": true, 00:21:53.057 "nvme_iov_md": false 00:21:53.057 }, 00:21:53.057 "driver_specific": { 00:21:53.057 "nvme": [ 00:21:53.057 { 00:21:53.057 "pci_address": "0000:00:11.0", 00:21:53.057 "trid": { 00:21:53.057 "trtype": "PCIe", 00:21:53.057 "traddr": "0000:00:11.0" 00:21:53.057 }, 00:21:53.058 "ctrlr_data": { 00:21:53.058 "cntlid": 0, 00:21:53.058 "vendor_id": "0x1b36", 00:21:53.058 "model_number": "QEMU NVMe Ctrl", 00:21:53.058 "serial_number": "12341", 00:21:53.058 "firmware_revision": "8.0.0", 00:21:53.058 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:53.058 "oacs": { 00:21:53.058 "security": 0, 00:21:53.058 "format": 1, 00:21:53.058 "firmware": 0, 00:21:53.058 "ns_manage": 1 00:21:53.058 }, 00:21:53.058 "multi_ctrlr": false, 00:21:53.058 "ana_reporting": false 00:21:53.058 }, 00:21:53.058 "vs": { 00:21:53.058 "nvme_version": "1.4" 00:21:53.058 }, 00:21:53.058 "ns_data": { 00:21:53.058 "id": 1, 00:21:53.058 "can_share": false 00:21:53.058 } 00:21:53.058 } 00:21:53.058 ], 00:21:53.058 "mp_policy": "active_passive" 00:21:53.058 } 00:21:53.058 } 00:21:53.058 ]' 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:53.058 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:53.317 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:21:53.317 14:27:17 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:53.575 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e291842f-8226-4b1a-a386-229d4d06b95c 00:21:53.575 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e291842f-8226-4b1a-a386-229d4d06b95c 00:21:53.576 14:27:18 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=31280f38-8fc4-4d19-948e-49e8f8350947 00:21:53.576 14:27:18 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 31280f38-8fc4-4d19-948e-49e8f8350947 00:21:53.576 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:21:53.576 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:53.576 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=31280f38-8fc4-4d19-948e-49e8f8350947 00:21:53.576 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:21:53.835 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 31280f38-8fc4-4d19-948e-49e8f8350947 00:21:53.835 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=31280f38-8fc4-4d19-948e-49e8f8350947 
00:21:53.835 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:53.835 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:53.835 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:53.835 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31280f38-8fc4-4d19-948e-49e8f8350947 00:21:53.835 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:53.835 { 00:21:53.835 "name": "31280f38-8fc4-4d19-948e-49e8f8350947", 00:21:53.835 "aliases": [ 00:21:53.835 "lvs/nvme0n1p0" 00:21:53.835 ], 00:21:53.835 "product_name": "Logical Volume", 00:21:53.835 "block_size": 4096, 00:21:53.835 "num_blocks": 26476544, 00:21:53.835 "uuid": "31280f38-8fc4-4d19-948e-49e8f8350947", 00:21:53.835 "assigned_rate_limits": { 00:21:53.835 "rw_ios_per_sec": 0, 00:21:53.835 "rw_mbytes_per_sec": 0, 00:21:53.835 "r_mbytes_per_sec": 0, 00:21:53.835 "w_mbytes_per_sec": 0 00:21:53.835 }, 00:21:53.835 "claimed": false, 00:21:53.835 "zoned": false, 00:21:53.835 "supported_io_types": { 00:21:53.835 "read": true, 00:21:53.835 "write": true, 00:21:53.835 "unmap": true, 00:21:53.835 "flush": false, 00:21:53.835 "reset": true, 00:21:53.835 "nvme_admin": false, 00:21:53.835 "nvme_io": false, 00:21:53.835 "nvme_io_md": false, 00:21:53.835 "write_zeroes": true, 00:21:53.835 "zcopy": false, 00:21:53.835 "get_zone_info": false, 00:21:53.835 "zone_management": false, 00:21:53.835 "zone_append": false, 00:21:53.835 "compare": false, 00:21:53.835 "compare_and_write": false, 00:21:53.835 "abort": false, 00:21:53.835 "seek_hole": true, 00:21:53.835 "seek_data": true, 00:21:53.835 "copy": false, 00:21:53.835 "nvme_iov_md": false 00:21:53.835 }, 00:21:53.835 "driver_specific": { 00:21:53.835 "lvol": { 00:21:53.835 "lvol_store_uuid": "e291842f-8226-4b1a-a386-229d4d06b95c", 00:21:53.835 "base_bdev": "nvme0n1", 00:21:53.835 "thin_provision": true, 00:21:53.835 "num_allocated_clusters": 0, 00:21:53.835 "snapshot": false, 00:21:53.835 "clone": false, 00:21:53.835 "esnap_clone": false 00:21:53.835 } 00:21:53.835 } 00:21:53.835 } 00:21:53.835 ]' 00:21:53.835 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:53.835 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:53.835 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:54.094 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:54.094 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:54.094 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:54.094 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:21:54.094 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:21:54.094 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:54.354 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:54.354 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:54.354 14:27:18 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 31280f38-8fc4-4d19-948e-49e8f8350947 00:21:54.354 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=31280f38-8fc4-4d19-948e-49e8f8350947 00:21:54.354 14:27:18 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:54.354 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:54.354 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:54.354 14:27:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31280f38-8fc4-4d19-948e-49e8f8350947 00:21:54.354 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:54.354 { 00:21:54.354 "name": "31280f38-8fc4-4d19-948e-49e8f8350947", 00:21:54.354 "aliases": [ 00:21:54.354 "lvs/nvme0n1p0" 00:21:54.354 ], 00:21:54.354 "product_name": "Logical Volume", 00:21:54.354 "block_size": 4096, 00:21:54.354 "num_blocks": 26476544, 00:21:54.354 "uuid": "31280f38-8fc4-4d19-948e-49e8f8350947", 00:21:54.354 "assigned_rate_limits": { 00:21:54.354 "rw_ios_per_sec": 0, 00:21:54.354 "rw_mbytes_per_sec": 0, 00:21:54.354 "r_mbytes_per_sec": 0, 00:21:54.354 "w_mbytes_per_sec": 0 00:21:54.354 }, 00:21:54.354 "claimed": false, 00:21:54.354 "zoned": false, 00:21:54.354 "supported_io_types": { 00:21:54.354 "read": true, 00:21:54.354 "write": true, 00:21:54.354 "unmap": true, 00:21:54.354 "flush": false, 00:21:54.354 "reset": true, 00:21:54.354 "nvme_admin": false, 00:21:54.354 "nvme_io": false, 00:21:54.354 "nvme_io_md": false, 00:21:54.354 "write_zeroes": true, 00:21:54.354 "zcopy": false, 00:21:54.354 "get_zone_info": false, 00:21:54.354 "zone_management": false, 00:21:54.354 "zone_append": false, 00:21:54.354 "compare": false, 00:21:54.354 "compare_and_write": false, 00:21:54.354 "abort": false, 00:21:54.354 "seek_hole": true, 00:21:54.354 "seek_data": true, 00:21:54.354 "copy": false, 00:21:54.354 "nvme_iov_md": false 00:21:54.354 }, 00:21:54.354 "driver_specific": { 00:21:54.354 "lvol": { 00:21:54.354 "lvol_store_uuid": "e291842f-8226-4b1a-a386-229d4d06b95c", 00:21:54.354 "base_bdev": "nvme0n1", 00:21:54.354 "thin_provision": true, 00:21:54.354 "num_allocated_clusters": 0, 00:21:54.354 "snapshot": false, 00:21:54.354 "clone": false, 00:21:54.354 "esnap_clone": false 00:21:54.354 } 00:21:54.354 } 00:21:54.354 } 00:21:54.354 ]' 00:21:54.354 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:21:54.613 14:27:19 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:21:54.613 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 31280f38-8fc4-4d19-948e-49e8f8350947 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=31280f38-8fc4-4d19-948e-49e8f8350947 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31280f38-8fc4-4d19-948e-49e8f8350947 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:54.873 { 00:21:54.873 "name": "31280f38-8fc4-4d19-948e-49e8f8350947", 00:21:54.873 "aliases": [ 00:21:54.873 "lvs/nvme0n1p0" 00:21:54.873 ], 00:21:54.873 "product_name": "Logical Volume", 00:21:54.873 "block_size": 4096, 00:21:54.873 "num_blocks": 26476544, 00:21:54.873 "uuid": "31280f38-8fc4-4d19-948e-49e8f8350947", 00:21:54.873 "assigned_rate_limits": { 00:21:54.873 "rw_ios_per_sec": 0, 00:21:54.873 "rw_mbytes_per_sec": 0, 00:21:54.873 "r_mbytes_per_sec": 0, 00:21:54.873 "w_mbytes_per_sec": 0 00:21:54.873 }, 00:21:54.873 "claimed": false, 00:21:54.873 "zoned": false, 00:21:54.873 "supported_io_types": { 00:21:54.873 "read": true, 00:21:54.873 "write": true, 00:21:54.873 "unmap": true, 00:21:54.873 "flush": false, 00:21:54.873 "reset": true, 00:21:54.873 "nvme_admin": false, 00:21:54.873 "nvme_io": false, 00:21:54.873 "nvme_io_md": false, 00:21:54.873 "write_zeroes": true, 00:21:54.873 "zcopy": false, 00:21:54.873 "get_zone_info": false, 00:21:54.873 "zone_management": false, 00:21:54.873 "zone_append": false, 00:21:54.873 "compare": false, 00:21:54.873 "compare_and_write": false, 00:21:54.873 "abort": false, 00:21:54.873 "seek_hole": true, 00:21:54.873 "seek_data": true, 00:21:54.873 "copy": false, 00:21:54.873 "nvme_iov_md": false 00:21:54.873 }, 00:21:54.873 "driver_specific": { 00:21:54.873 "lvol": { 00:21:54.873 "lvol_store_uuid": "e291842f-8226-4b1a-a386-229d4d06b95c", 00:21:54.873 "base_bdev": "nvme0n1", 00:21:54.873 "thin_provision": true, 00:21:54.873 "num_allocated_clusters": 0, 00:21:54.873 "snapshot": false, 00:21:54.873 "clone": false, 00:21:54.873 "esnap_clone": false 00:21:54.873 } 00:21:54.873 } 00:21:54.873 } 00:21:54.873 ]' 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:54.873 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:55.134 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:55.134 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:55.134 14:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:55.134 14:27:19 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:21:55.134 14:27:19 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:21:55.134 14:27:19 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 31280f38-8fc4-4d19-948e-49e8f8350947 -c nvc0n1p0 --l2p_dram_limit 60 00:21:55.134 [2024-12-10 14:27:19.922103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 14:27:19.922158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:55.134 [2024-12-10 14:27:19.922180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:55.134 
[2024-12-10 14:27:19.922191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.134 [2024-12-10 14:27:19.922299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 14:27:19.922316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.134 [2024-12-10 14:27:19.922333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:55.134 [2024-12-10 14:27:19.922343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.134 [2024-12-10 14:27:19.922399] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:55.134 [2024-12-10 14:27:19.923562] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:55.134 [2024-12-10 14:27:19.923600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 14:27:19.923612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.134 [2024-12-10 14:27:19.923627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:21:55.134 [2024-12-10 14:27:19.923637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.134 [2024-12-10 14:27:19.923955] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f6776629-589e-4ed5-b209-7cc19a7f353b 00:21:55.134 [2024-12-10 14:27:19.926513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 14:27:19.926553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:55.134 [2024-12-10 14:27:19.926568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:55.134 [2024-12-10 14:27:19.926582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.134 [2024-12-10 14:27:19.940492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 14:27:19.940667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.134 [2024-12-10 14:27:19.940717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.823 ms 00:21:55.134 [2024-12-10 14:27:19.940733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.134 [2024-12-10 14:27:19.940894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 14:27:19.940915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.134 [2024-12-10 14:27:19.940927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:21:55.134 [2024-12-10 14:27:19.940951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.134 [2024-12-10 14:27:19.941041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 14:27:19.941061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:55.134 [2024-12-10 14:27:19.941072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:55.134 [2024-12-10 14:27:19.941086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.134 [2024-12-10 14:27:19.941134] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:55.134 [2024-12-10 14:27:19.947422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 
14:27:19.947576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.134 [2024-12-10 14:27:19.947604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.304 ms 00:21:55.134 [2024-12-10 14:27:19.947619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.134 [2024-12-10 14:27:19.947740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 14:27:19.947756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:55.134 [2024-12-10 14:27:19.947770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:55.134 [2024-12-10 14:27:19.947781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.134 [2024-12-10 14:27:19.947842] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:55.134 [2024-12-10 14:27:19.948022] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:55.134 [2024-12-10 14:27:19.948048] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:55.134 [2024-12-10 14:27:19.948062] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:55.134 [2024-12-10 14:27:19.948080] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:55.134 [2024-12-10 14:27:19.948092] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:55.134 [2024-12-10 14:27:19.948109] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:55.134 [2024-12-10 14:27:19.948120] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:55.134 [2024-12-10 14:27:19.948133] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:55.134 [2024-12-10 14:27:19.948144] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:55.134 [2024-12-10 14:27:19.948159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.134 [2024-12-10 14:27:19.948174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:55.134 [2024-12-10 14:27:19.948188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:21:55.135 [2024-12-10 14:27:19.948199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.135 [2024-12-10 14:27:19.948300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.135 [2024-12-10 14:27:19.948312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:55.135 [2024-12-10 14:27:19.948326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:55.135 [2024-12-10 14:27:19.948336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.135 [2024-12-10 14:27:19.948464] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:55.135 [2024-12-10 14:27:19.948477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:55.135 [2024-12-10 14:27:19.948495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.135 [2024-12-10 14:27:19.948506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948520] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:21:55.135 [2024-12-10 14:27:19.948529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:55.135 [2024-12-10 14:27:19.948551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:55.135 [2024-12-10 14:27:19.948565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.135 [2024-12-10 14:27:19.948593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:55.135 [2024-12-10 14:27:19.948602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:55.135 [2024-12-10 14:27:19.948615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.135 [2024-12-10 14:27:19.948625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:55.135 [2024-12-10 14:27:19.948638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:55.135 [2024-12-10 14:27:19.948648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:55.135 [2024-12-10 14:27:19.948686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:55.135 [2024-12-10 14:27:19.948699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:55.135 [2024-12-10 14:27:19.948721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.135 [2024-12-10 14:27:19.948743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:55.135 [2024-12-10 14:27:19.948753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.135 [2024-12-10 14:27:19.948785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:55.135 [2024-12-10 14:27:19.948798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.135 [2024-12-10 14:27:19.948818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:55.135 [2024-12-10 14:27:19.948828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.135 [2024-12-10 14:27:19.948848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:55.135 [2024-12-10 14:27:19.948862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.135 [2024-12-10 14:27:19.948902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:55.135 [2024-12-10 14:27:19.948911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:55.135 [2024-12-10 14:27:19.948923] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.135 [2024-12-10 14:27:19.948933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:55.135 [2024-12-10 14:27:19.948945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:55.135 [2024-12-10 14:27:19.948954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:55.135 [2024-12-10 14:27:19.948974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:55.135 [2024-12-10 14:27:19.948989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.135 [2024-12-10 14:27:19.948997] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:55.135 [2024-12-10 14:27:19.949011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:55.135 [2024-12-10 14:27:19.949021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.135 [2024-12-10 14:27:19.949034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.135 [2024-12-10 14:27:19.949057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:55.135 [2024-12-10 14:27:19.949072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:55.135 [2024-12-10 14:27:19.949081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:55.135 [2024-12-10 14:27:19.949094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:55.135 [2024-12-10 14:27:19.949102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:55.135 [2024-12-10 14:27:19.949114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:55.135 [2024-12-10 14:27:19.949124] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:55.135 [2024-12-10 14:27:19.949139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.135 [2024-12-10 14:27:19.949151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:55.135 [2024-12-10 14:27:19.949164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:55.135 [2024-12-10 14:27:19.949173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:55.135 [2024-12-10 14:27:19.949186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:55.135 [2024-12-10 14:27:19.949196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:55.135 [2024-12-10 14:27:19.949210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:55.135 [2024-12-10 14:27:19.949220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:55.135 [2024-12-10 14:27:19.949233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:55.135 [2024-12-10 14:27:19.949243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:55.135 [2024-12-10 14:27:19.949259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:55.135 [2024-12-10 14:27:19.949268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:55.135 [2024-12-10 14:27:19.949280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:55.135 [2024-12-10 14:27:19.949290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:55.135 [2024-12-10 14:27:19.949302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:55.135 [2024-12-10 14:27:19.949311] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:55.135 [2024-12-10 14:27:19.949325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.135 [2024-12-10 14:27:19.949338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:55.135 [2024-12-10 14:27:19.949351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:55.135 [2024-12-10 14:27:19.949360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:55.135 [2024-12-10 14:27:19.949375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:55.135 [2024-12-10 14:27:19.949385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.135 [2024-12-10 14:27:19.949398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:55.135 [2024-12-10 14:27:19.949408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:21:55.135 [2024-12-10 14:27:19.949419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.135 [2024-12-10 14:27:19.949539] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
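The layout dump above is internally consistent with the create parameters: ftl0 spans 20971520 blocks of 4 KiB, and at 4 bytes per L2P entry the full map needs 20971520 x 4 B = 80 MiB, exactly the 80.00 MiB l2p region listed. Because the bdev was created with --l2p_dram_limit 60, less than the full map may stay resident in DRAM, which is why startup later reports 'l2p maximum resident size is: 59 (of 60) MiB'. The same arithmetic as a quick shell check:

echo $(( 20971520 * 4 / 1024 / 1024 ))   # full L2P size in MiB -> 80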
00:21:55.135 [2024-12-10 14:27:19.949559] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:00.469 [2024-12-10 14:27:24.376987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.377078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:00.469 [2024-12-10 14:27:24.377098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4434.635 ms 00:22:00.469 [2024-12-10 14:27:24.377112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.423030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.423297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.469 [2024-12-10 14:27:24.423325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.636 ms 00:22:00.469 [2024-12-10 14:27:24.423341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.423519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.423540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:00.469 [2024-12-10 14:27:24.423553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:22:00.469 [2024-12-10 14:27:24.423571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.495318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.495362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.469 [2024-12-10 14:27:24.495400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.800 ms 00:22:00.469 [2024-12-10 14:27:24.495415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.495464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.495478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.469 [2024-12-10 14:27:24.495491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:00.469 [2024-12-10 14:27:24.495504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.496349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.496373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.469 [2024-12-10 14:27:24.496385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:22:00.469 [2024-12-10 14:27:24.496403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.496551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.496569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.469 [2024-12-10 14:27:24.496580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:00.469 [2024-12-10 14:27:24.496597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.522639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.522841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.469 [2024-12-10 
14:27:24.522865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.042 ms 00:22:00.469 [2024-12-10 14:27:24.522880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.536303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:00.469 [2024-12-10 14:27:24.562115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.562300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:00.469 [2024-12-10 14:27:24.562350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.162 ms 00:22:00.469 [2024-12-10 14:27:24.562361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.661301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.661538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:00.469 [2024-12-10 14:27:24.661591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.036 ms 00:22:00.469 [2024-12-10 14:27:24.661603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.661899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.661917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:00.469 [2024-12-10 14:27:24.661937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:22:00.469 [2024-12-10 14:27:24.661948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.697221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.697260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:00.469 [2024-12-10 14:27:24.697277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.247 ms 00:22:00.469 [2024-12-10 14:27:24.697288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.731592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.731737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:00.469 [2024-12-10 14:27:24.731797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.300 ms 00:22:00.469 [2024-12-10 14:27:24.731808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.732702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.732727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:00.469 [2024-12-10 14:27:24.732743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:22:00.469 [2024-12-10 14:27:24.732753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.833572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.833762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:00.469 [2024-12-10 14:27:24.833796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.894 ms 00:22:00.469 [2024-12-10 14:27:24.833812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 
14:27:24.871579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.871615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:00.469 [2024-12-10 14:27:24.871633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.701 ms 00:22:00.469 [2024-12-10 14:27:24.871659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.907426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.907462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:00.469 [2024-12-10 14:27:24.907478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.747 ms 00:22:00.469 [2024-12-10 14:27:24.907504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.942547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.942584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:00.469 [2024-12-10 14:27:24.942601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.038 ms 00:22:00.469 [2024-12-10 14:27:24.942627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.942711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.942724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:00.469 [2024-12-10 14:27:24.942746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:00.469 [2024-12-10 14:27:24.942761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.942949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.469 [2024-12-10 14:27:24.942965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:00.469 [2024-12-10 14:27:24.942980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:00.469 [2024-12-10 14:27:24.942991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.469 [2024-12-10 14:27:24.944505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5030.033 ms, result 0 00:22:00.469 { 00:22:00.469 "name": "ftl0", 00:22:00.469 "uuid": "f6776629-589e-4ed5-b209-7cc19a7f353b" 00:22:00.469 } 00:22:00.469 14:27:24 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:22:00.469 14:27:24 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:00.469 14:27:24 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:00.469 14:27:24 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:22:00.469 14:27:24 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:00.469 14:27:24 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:00.469 14:27:24 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:00.469 14:27:25 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:00.728 [ 00:22:00.728 { 00:22:00.728 "name": "ftl0", 00:22:00.728 "aliases": [ 00:22:00.728 "f6776629-589e-4ed5-b209-7cc19a7f353b" 00:22:00.728 ], 00:22:00.728 "product_name": "FTL 
disk", 00:22:00.728 "block_size": 4096, 00:22:00.728 "num_blocks": 20971520, 00:22:00.728 "uuid": "f6776629-589e-4ed5-b209-7cc19a7f353b", 00:22:00.728 "assigned_rate_limits": { 00:22:00.728 "rw_ios_per_sec": 0, 00:22:00.728 "rw_mbytes_per_sec": 0, 00:22:00.728 "r_mbytes_per_sec": 0, 00:22:00.728 "w_mbytes_per_sec": 0 00:22:00.728 }, 00:22:00.728 "claimed": false, 00:22:00.728 "zoned": false, 00:22:00.728 "supported_io_types": { 00:22:00.728 "read": true, 00:22:00.728 "write": true, 00:22:00.728 "unmap": true, 00:22:00.728 "flush": true, 00:22:00.728 "reset": false, 00:22:00.728 "nvme_admin": false, 00:22:00.728 "nvme_io": false, 00:22:00.729 "nvme_io_md": false, 00:22:00.729 "write_zeroes": true, 00:22:00.729 "zcopy": false, 00:22:00.729 "get_zone_info": false, 00:22:00.729 "zone_management": false, 00:22:00.729 "zone_append": false, 00:22:00.729 "compare": false, 00:22:00.729 "compare_and_write": false, 00:22:00.729 "abort": false, 00:22:00.729 "seek_hole": false, 00:22:00.729 "seek_data": false, 00:22:00.729 "copy": false, 00:22:00.729 "nvme_iov_md": false 00:22:00.729 }, 00:22:00.729 "driver_specific": { 00:22:00.729 "ftl": { 00:22:00.729 "base_bdev": "31280f38-8fc4-4d19-948e-49e8f8350947", 00:22:00.729 "cache": "nvc0n1p0" 00:22:00.729 } 00:22:00.729 } 00:22:00.729 } 00:22:00.729 ] 00:22:00.729 14:27:25 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:22:00.729 14:27:25 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:22:00.729 14:27:25 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:00.988 14:27:25 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:22:00.988 14:27:25 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:00.988 [2024-12-10 14:27:25.743411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.988 [2024-12-10 14:27:25.743611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:00.988 [2024-12-10 14:27:25.743652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:00.988 [2024-12-10 14:27:25.743668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.988 [2024-12-10 14:27:25.743752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:00.988 [2024-12-10 14:27:25.748277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.988 [2024-12-10 14:27:25.748310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.988 [2024-12-10 14:27:25.748326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.504 ms 00:22:00.988 [2024-12-10 14:27:25.748336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.988 [2024-12-10 14:27:25.748881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.988 [2024-12-10 14:27:25.748899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.988 [2024-12-10 14:27:25.748914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:22:00.988 [2024-12-10 14:27:25.748924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.988 [2024-12-10 14:27:25.751379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.988 [2024-12-10 14:27:25.751406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.988 
[2024-12-10 14:27:25.751420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:22:00.988 [2024-12-10 14:27:25.751430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.988 [2024-12-10 14:27:25.756302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.988 [2024-12-10 14:27:25.756334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.988 [2024-12-10 14:27:25.756349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.839 ms 00:22:00.988 [2024-12-10 14:27:25.756359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.988 [2024-12-10 14:27:25.792516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.988 [2024-12-10 14:27:25.792553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.988 [2024-12-10 14:27:25.792588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.105 ms 00:22:00.988 [2024-12-10 14:27:25.792598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.988 [2024-12-10 14:27:25.814724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.988 [2024-12-10 14:27:25.814763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.988 [2024-12-10 14:27:25.814784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.101 ms 00:22:00.988 [2024-12-10 14:27:25.814794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.988 [2024-12-10 14:27:25.815072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.988 [2024-12-10 14:27:25.815087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.988 [2024-12-10 14:27:25.815101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:22:00.988 [2024-12-10 14:27:25.815111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.249 [2024-12-10 14:27:25.850867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.249 [2024-12-10 14:27:25.850902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:01.249 [2024-12-10 14:27:25.850918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.777 ms 00:22:01.249 [2024-12-10 14:27:25.850928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.249 [2024-12-10 14:27:25.885532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.249 [2024-12-10 14:27:25.885691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:01.249 [2024-12-10 14:27:25.885718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.602 ms 00:22:01.249 [2024-12-10 14:27:25.885727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.249 [2024-12-10 14:27:25.920257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.249 [2024-12-10 14:27:25.920301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:01.249 [2024-12-10 14:27:25.920318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.479 ms 00:22:01.249 [2024-12-10 14:27:25.920327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.249 [2024-12-10 14:27:25.954367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.249 [2024-12-10 14:27:25.954497] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:01.249 [2024-12-10 14:27:25.954538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.905 ms 00:22:01.249 [2024-12-10 14:27:25.954548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.249 [2024-12-10 14:27:25.954604] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:01.249 [2024-12-10 14:27:25.954622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 
[2024-12-10 14:27:25.954933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.954987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.955002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.955013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.955028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:01.249 [2024-12-10 14:27:25.955039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:22:01.250 [2024-12-10 14:27:25.955267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.955984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:01.250 [2024-12-10 14:27:25.956002] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:01.250 [2024-12-10 14:27:25.956014] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f6776629-589e-4ed5-b209-7cc19a7f353b 00:22:01.250 [2024-12-10 14:27:25.956026] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:01.250 [2024-12-10 14:27:25.956042] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:01.250 [2024-12-10 14:27:25.956052] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:01.250 [2024-12-10 14:27:25.956068] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:01.250 [2024-12-10 14:27:25.956078] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:01.250 [2024-12-10 14:27:25.956092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:01.250 [2024-12-10 14:27:25.956101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:01.250 [2024-12-10 14:27:25.956112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:01.250 [2024-12-10 14:27:25.956120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:01.250 [2024-12-10 14:27:25.956133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.250 [2024-12-10 14:27:25.956144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:01.250 [2024-12-10 14:27:25.956157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:22:01.250 [2024-12-10 14:27:25.956167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.250 [2024-12-10 14:27:25.976267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.250 [2024-12-10 14:27:25.976303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:01.250 [2024-12-10 14:27:25.976319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.056 ms 00:22:01.250 [2024-12-10 14:27:25.976329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.250 [2024-12-10 14:27:25.976953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.250 [2024-12-10 14:27:25.976968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:01.250 [2024-12-10 14:27:25.976983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:22:01.250 [2024-12-10 14:27:25.976993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.250 [2024-12-10 14:27:26.047070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.250 [2024-12-10 14:27:26.047239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:01.250 [2024-12-10 14:27:26.047266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.250 [2024-12-10 14:27:26.047277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:01.250 [2024-12-10 14:27:26.047358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.250 [2024-12-10 14:27:26.047369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:01.251 [2024-12-10 14:27:26.047383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.251 [2024-12-10 14:27:26.047393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.251 [2024-12-10 14:27:26.047526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.251 [2024-12-10 14:27:26.047545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:01.251 [2024-12-10 14:27:26.047559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.251 [2024-12-10 14:27:26.047570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.251 [2024-12-10 14:27:26.047612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.251 [2024-12-10 14:27:26.047624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:01.251 [2024-12-10 14:27:26.047637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.251 [2024-12-10 14:27:26.047647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.510 [2024-12-10 14:27:26.185751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.510 [2024-12-10 14:27:26.185814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:01.510 [2024-12-10 14:27:26.185849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.510 [2024-12-10 14:27:26.185861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.510 [2024-12-10 14:27:26.285600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.510 [2024-12-10 14:27:26.285654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.510 [2024-12-10 14:27:26.285707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.510 [2024-12-10 14:27:26.285719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.510 [2024-12-10 14:27:26.285903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.510 [2024-12-10 14:27:26.285917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.510 [2024-12-10 14:27:26.285936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.510 [2024-12-10 14:27:26.285947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.510 [2024-12-10 14:27:26.286044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.510 [2024-12-10 14:27:26.286057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.510 [2024-12-10 14:27:26.286072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.510 [2024-12-10 14:27:26.286082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.510 [2024-12-10 14:27:26.286237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.510 [2024-12-10 14:27:26.286251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.510 [2024-12-10 14:27:26.286266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.510 [2024-12-10 
14:27:26.286281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.510 [2024-12-10 14:27:26.286351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.510 [2024-12-10 14:27:26.286364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:01.510 [2024-12-10 14:27:26.286378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.510 [2024-12-10 14:27:26.286389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.510 [2024-12-10 14:27:26.286458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.510 [2024-12-10 14:27:26.286471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.510 [2024-12-10 14:27:26.286485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.510 [2024-12-10 14:27:26.286500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.510 [2024-12-10 14:27:26.286572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.510 [2024-12-10 14:27:26.286585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.510 [2024-12-10 14:27:26.286599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.510 [2024-12-10 14:27:26.286610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.510 [2024-12-10 14:27:26.286861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.287 ms, result 0 00:22:01.510 true 00:22:01.510 14:27:26 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78165 00:22:01.510 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 78165 ']' 00:22:01.511 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 78165 00:22:01.511 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:22:01.511 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.511 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78165 00:22:01.770 killing process with pid 78165 00:22:01.770 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:01.770 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:01.770 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78165' 00:22:01.770 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 78165 00:22:01.770 14:27:26 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 78165 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:05.960 14:27:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:05.960 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:22:05.960 fio-3.35 00:22:05.960 Starting 1 thread 00:22:12.534 00:22:12.534 test: (groupid=0, jobs=1): err= 0: pid=78389: Tue Dec 10 14:27:36 2024 00:22:12.534 read: IOPS=846, BW=56.2MiB/s (58.9MB/s)(255MiB/4528msec) 00:22:12.534 slat (nsec): min=8494, max=41388, avg=12356.93, stdev=3058.13 00:22:12.534 clat (usec): min=339, max=761, avg=533.74, stdev=43.15 00:22:12.534 lat (usec): min=351, max=771, avg=546.10, stdev=43.59 00:22:12.534 clat percentiles (usec): 00:22:12.534 | 1.00th=[ 412], 5.00th=[ 465], 10.00th=[ 478], 20.00th=[ 494], 00:22:12.534 | 30.00th=[ 515], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:22:12.534 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 594], 00:22:12.534 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 693], 99.95th=[ 701], 00:22:12.534 | 99.99th=[ 758] 00:22:12.534 write: IOPS=852, BW=56.6MiB/s (59.4MB/s)(256MiB/4523msec); 0 zone resets 00:22:12.534 slat (usec): min=18, max=189, avg=32.15, stdev= 6.78 00:22:12.534 clat (usec): min=409, max=1157, avg=582.56, stdev=58.91 00:22:12.534 lat (usec): min=440, max=1189, avg=614.71, stdev=59.08 00:22:12.534 clat percentiles (usec): 00:22:12.534 | 1.00th=[ 482], 5.00th=[ 498], 10.00th=[ 515], 20.00th=[ 553], 00:22:12.534 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 586], 00:22:12.534 | 70.00th=[ 594], 80.00th=[ 619], 90.00th=[ 635], 95.00th=[ 652], 00:22:12.534 | 99.00th=[ 881], 99.50th=[ 914], 99.90th=[ 1029], 99.95th=[ 1045], 00:22:12.534 | 99.99th=[ 1156] 00:22:12.534 bw ( KiB/s): min=56984, max=59024, per=100.00%, avg=58056.89, stdev=717.86, samples=9 00:22:12.534 iops : min= 838, max= 868, avg=853.78, stdev=10.56, samples=9 00:22:12.534 lat (usec) : 500=14.79%, 750=84.39%, 1000=0.74% 00:22:12.534 lat (msec) : 
2=0.08% 00:22:12.534 cpu : usr=99.09%, sys=0.18%, ctx=11, majf=0, minf=1169 00:22:12.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.534 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:12.534 00:22:12.534 Run status group 0 (all jobs): 00:22:12.534 READ: bw=56.2MiB/s (58.9MB/s), 56.2MiB/s-56.2MiB/s (58.9MB/s-58.9MB/s), io=255MiB (267MB), run=4528-4528msec 00:22:12.534 WRITE: bw=56.6MiB/s (59.4MB/s), 56.6MiB/s-56.6MiB/s (59.4MB/s-59.4MB/s), io=256MiB (269MB), run=4523-4523msec 00:22:13.473 ----------------------------------------------------- 00:22:13.473 Suppressions used: 00:22:13.473 count bytes template 00:22:13.473 1 5 /usr/src/fio/parse.c 00:22:13.473 1 8 libtcmalloc_minimal.so 00:22:13.473 1 904 libcrypto.so 00:22:13.473 ----------------------------------------------------- 00:22:13.473 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:13.473 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:13.733 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:13.733 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:13.733 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:13.733 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:13.733 14:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:13.733 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:13.733 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:13.733 fio-3.35 00:22:13.733 Starting 2 threads 00:22:40.286 00:22:40.286 first_half: (groupid=0, jobs=1): err= 0: pid=78503: Tue Dec 10 14:28:04 2024 00:22:40.286 read: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(256MiB/24441msec) 00:22:40.286 slat (nsec): min=3560, max=38987, avg=7316.58, stdev=2991.90 00:22:40.286 clat (msec): min=10, max=210, avg=41.04, stdev=24.73 00:22:40.286 lat (msec): min=10, max=210, avg=41.04, stdev=24.73 00:22:40.286 clat percentiles (msec): 00:22:40.286 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:22:40.286 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:22:40.286 | 70.00th=[ 35], 80.00th=[ 41], 90.00th=[ 43], 95.00th=[ 77], 00:22:40.286 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 207], 99.95th=[ 209], 00:22:40.286 | 99.99th=[ 211] 00:22:40.286 write: IOPS=2695, BW=10.5MiB/s (11.0MB/s)(256MiB/24309msec); 0 zone resets 00:22:40.286 slat (usec): min=4, max=1532, avg= 8.36, stdev=12.77 00:22:40.286 clat (usec): min=434, max=49365, avg=6703.01, stdev=4102.63 00:22:40.286 lat (usec): min=448, max=49382, avg=6711.37, stdev=4103.39 00:22:40.286 clat percentiles (usec): 00:22:40.286 | 1.00th=[ 1156], 5.00th=[ 1991], 10.00th=[ 2999], 20.00th=[ 3818], 00:22:40.286 | 30.00th=[ 4948], 40.00th=[ 5407], 50.00th=[ 6063], 60.00th=[ 6521], 00:22:40.286 | 70.00th=[ 6915], 80.00th=[ 8160], 90.00th=[12256], 95.00th=[13566], 00:22:40.286 | 99.00th=[23200], 99.50th=[30540], 99.90th=[36963], 99.95th=[38011], 00:22:40.286 | 99.99th=[44827] 00:22:40.286 bw ( KiB/s): min= 360, max=44320, per=100.00%, avg=22627.83, stdev=13973.14, samples=23 00:22:40.286 iops : min= 90, max=11080, avg=5656.96, stdev=3493.28, samples=23 00:22:40.286 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.21% 00:22:40.286 lat (msec) : 2=2.26%, 4=8.70%, 10=30.74%, 20=7.51%, 50=46.58% 00:22:40.286 lat (msec) : 100=2.09%, 250=1.85% 00:22:40.286 cpu : usr=99.23%, sys=0.23%, ctx=38, majf=0, minf=5569 00:22:40.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:40.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.286 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:40.286 issued rwts: total=65481,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:40.286 second_half: (groupid=0, jobs=1): err= 0: pid=78504: Tue Dec 10 14:28:04 2024 00:22:40.286 read: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(256MiB/24644msec) 00:22:40.286 slat (nsec): min=3560, max=92907, avg=12113.74, stdev=4250.58 00:22:40.286 clat (usec): min=824, max=276282, avg=40317.02, stdev=26069.64 00:22:40.286 lat (usec): min=828, max=276294, avg=40329.14, stdev=26070.20 00:22:40.286 clat percentiles (msec): 00:22:40.286 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:22:40.286 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:22:40.286 | 70.00th=[ 35], 80.00th=[ 40], 90.00th=[ 42], 95.00th=[ 82], 00:22:40.286 | 99.00th=[ 176], 99.50th=[ 188], 
99.90th=[ 218], 99.95th=[ 236], 00:22:40.286 | 99.99th=[ 271] 00:22:40.287 write: IOPS=2663, BW=10.4MiB/s (10.9MB/s)(256MiB/24608msec); 0 zone resets 00:22:40.287 slat (usec): min=4, max=477, avg=11.15, stdev= 6.89 00:22:40.287 clat (usec): min=358, max=50128, avg=7804.44, stdev=7605.63 00:22:40.287 lat (usec): min=370, max=50142, avg=7815.59, stdev=7605.73 00:22:40.287 clat percentiles (usec): 00:22:40.287 | 1.00th=[ 1074], 5.00th=[ 1401], 10.00th=[ 1762], 20.00th=[ 2835], 00:22:40.287 | 30.00th=[ 4178], 40.00th=[ 5407], 50.00th=[ 6325], 60.00th=[ 7242], 00:22:40.287 | 70.00th=[ 7963], 80.00th=[ 9503], 90.00th=[13173], 95.00th=[24773], 00:22:40.287 | 99.00th=[41157], 99.50th=[42730], 99.90th=[47449], 99.95th=[49021], 00:22:40.287 | 99.99th=[49546] 00:22:40.287 bw ( KiB/s): min= 2568, max=52808, per=100.00%, avg=21756.67, stdev=13673.46, samples=24 00:22:40.287 iops : min= 642, max=13202, avg=5439.17, stdev=3418.37, samples=24 00:22:40.287 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.25% 00:22:40.287 lat (msec) : 2=6.35%, 4=7.52%, 10=27.89%, 20=6.66%, 50=47.76% 00:22:40.287 lat (msec) : 100=1.40%, 250=2.12%, 500=0.02% 00:22:40.287 cpu : usr=99.07%, sys=0.13%, ctx=41, majf=0, minf=5536 00:22:40.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:40.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.287 complete : 0=0.0%, 4=98.7%, 8=1.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:40.287 issued rwts: total=65479,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:40.287 00:22:40.287 Run status group 0 (all jobs): 00:22:40.287 READ: bw=20.8MiB/s (21.8MB/s), 10.4MiB/s-10.5MiB/s (10.9MB/s-11.0MB/s), io=512MiB (536MB), run=24441-24644msec 00:22:40.287 WRITE: bw=20.8MiB/s (21.8MB/s), 10.4MiB/s-10.5MiB/s (10.9MB/s-11.0MB/s), io=512MiB (537MB), run=24309-24608msec 00:22:42.822 ----------------------------------------------------- 00:22:42.822 Suppressions used: 00:22:42.822 count bytes template 00:22:42.822 2 10 /usr/src/fio/parse.c 00:22:42.822 2 192 /usr/src/fio/iolog.c 00:22:42.822 1 8 libtcmalloc_minimal.so 00:22:42.822 1 904 libcrypto.so 00:22:42.822 ----------------------------------------------------- 00:22:42.822 00:22:42.822 14:28:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:42.822 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:42.823 14:28:07 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:22:42.823 14:28:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:22:42.823 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:22:42.823 fio-3.35
00:22:42.823
00:22:42.823 Starting 1 thread
00:23:00.919
00:23:00.919 test: (groupid=0, jobs=1): err= 0: pid=78829: Tue Dec 10 14:28:24 2024
00:23:00.919 read: IOPS=7157, BW=28.0MiB/s (29.3MB/s)(255MiB/9109msec)
00:23:00.919 slat (nsec): min=3374, max=41929, avg=9122.01, stdev=4434.21
00:23:00.919 clat (usec): min=764, max=34157, avg=17866.83, stdev=1096.99
00:23:00.919 lat (usec): min=768, max=34162, avg=17875.95, stdev=1096.89
00:23:00.919 clat percentiles (usec):
00:23:00.919 | 1.00th=[16909], 5.00th=[17171], 10.00th=[17433], 20.00th=[17433],
00:23:00.919 | 30.00th=[17695], 40.00th=[17695], 50.00th=[17695], 60.00th=[17957],
00:23:00.919 | 70.00th=[17957], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482],
00:23:00.919 | 99.00th=[21103], 99.50th=[26346], 99.90th=[29492], 99.95th=[30016],
00:23:00.919 | 99.99th=[33817]
00:23:00.919 write: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(256MiB/6256msec); 0 zone resets
00:23:00.919 slat (usec): min=4, max=1728, avg=10.09, stdev=12.39
00:23:00.919 clat (usec): min=651, max=59836, avg=12160.62, stdev=13882.87
00:23:00.919 lat (usec): min=658, max=59844, avg=12170.72, stdev=13882.84
00:23:00.919 clat percentiles (usec):
00:23:00.919 | 1.00th=[ 1012], 5.00th=[ 1221], 10.00th=[ 1385], 20.00th=[ 1631],
00:23:00.919 | 30.00th=[ 1876], 40.00th=[ 2573], 50.00th=[ 8225], 60.00th=[10290],
00:23:00.919 | 70.00th=[12649], 80.00th=[17171], 90.00th=[39060], 95.00th=[41681],
00:23:00.919 | 99.00th=[52167], 99.50th=[53740], 99.90th=[56886], 99.95th=[57934],
00:23:00.919 | 99.99th=[58983]
00:23:00.919 bw ( KiB/s): min=18688, max=53912, per=96.24%, avg=40329.85, stdev=8551.02, samples=13
00:23:00.919 iops : min= 4672, max=13478, avg=10082.46, stdev=2137.76, samples=13
00:23:00.919 lat (usec) : 750=0.01%, 1000=0.45%
00:23:00.919 lat (msec) : 2=16.35%, 4=4.23%, 10=8.48%, 20=60.41%, 50=9.16%
00:23:00.919 lat (msec) : 100=0.91%
00:23:00.919 cpu : usr=98.85%, sys=0.27%, ctx=27, majf=0, minf=5565
00:23:00.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:23:00.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:00.919 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:23:00.919 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:00.919 latency : target=0, window=0, percentile=100.00%, depth=128
00:23:00.919
00:23:00.919 Run status group 0 (all jobs):
00:23:00.919 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=255MiB (267MB), run=9109-9109msec
00:23:00.919 WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=256MiB (268MB), run=6256-6256msec
00:23:01.869 -----------------------------------------------------
00:23:01.869 Suppressions used:
00:23:01.869 count bytes template
00:23:01.869 1 5 /usr/src/fio/parse.c
00:23:01.869 2 192 /usr/src/fio/iolog.c
00:23:01.869 1 8 libtcmalloc_minimal.so
00:23:01.869 1 904 libcrypto.so
00:23:01.869 -----------------------------------------------------
00:23:01.869
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:01.869 Remove shared memory files
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58966 /dev/shm/spdk_tgt_trace.pid77051
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:23:01.869 14:28:26 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:23:01.869 ************************************
00:23:01.869 END TEST ftl_fio_basic
00:23:01.869 ************************************
00:23:01.869
00:23:01.869 real 1m11.174s
00:23:01.869 user 2m31.229s
00:23:01.869 sys 0m4.481s
00:23:02.128 14:28:26 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:23:02.128 14:28:26 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:23:02.128 14:28:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:02.128 14:28:26 ftl -- common/autotest_common.sh@10 -- # set +x
00:23:02.128 ************************************
00:23:02.128 START TEST ftl_bdevperf
00:23:02.128 ************************************
00:23:02.128 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:23:02.128 * Looking for test storage...
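
The xtrace blocks that preceded each of the three fio jobs above (randw-verify, randw-verify-j2, randw-verify-depth128) all run the same fio_bdev/fio_plugin wrapper from autotest_common.sh: ldd inspects the spdk_bdev ioengine plugin, grep and awk pick out the sanitizer runtime it was linked against, and that runtime is LD_PRELOAD'ed ahead of the plugin so the external, uninstrumented fio binary can load it. Below is a minimal sketch of that pattern, not the verbatim autotest_common.sh code, using the paths this run printed; the real wrapper also probes libclang_rt.asan for clang builds.

    #!/usr/bin/env bash
    # Sketch of the fio_bdev wrapper traced above (paths match this run).
    set -euo pipefail

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    fio_dir=/usr/src/fio

    # Find the ASan runtime the plugin links against,
    # e.g. /usr/lib64/libasan.so.8 in this log.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}' || true)

    # Preload the sanitizer runtime first, then the ioengine plugin,
    # and hand fio the job file given on the command line.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" "$fio_dir/fio" "$@"

The job banners pin down the workloads themselves: 68 KiB random writes at iodepth 1 for randw-verify, and 4 KiB random writes at iodepth 128 for the j2 (two threads) and depth128 variants, all via ioengine=spdk_bdev against the ftl0 bdev. The job files under test/ftl/config/fio/ are not reproduced in the log, so the following reconstruction of randw-verify-depth128.fio is an approximation from its banner; the spdk_json_conf, verify and size lines are assumptions.

    ; approximate reconstruction of randw-verify-depth128.fio (assumptions noted)
    [global]
    ioengine=spdk_bdev        ; served by the LD_PRELOAD'ed plugin above
    spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json  ; assumption: the JSON removed at fio.sh@84 above
    thread=1
    direct=1
    rw=randwrite              ; banner: rw=randwrite
    bs=4k                     ; banner: bs=(W) 4096B-4096B
    iodepth=128               ; banner: iodepth=128
    verify=crc32c             ; assumption: the *-verify jobs check written data

    [test]
    filename=ftl0             ; the FTL bdev created earlier in this log
    size=256M                 ; assumption: matches the ~255-256MiB totals reported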
00:23:02.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:02.128 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:02.128 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:02.128 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:02.388 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:02.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.389 --rc genhtml_branch_coverage=1 00:23:02.389 --rc genhtml_function_coverage=1 00:23:02.389 --rc genhtml_legend=1 00:23:02.389 --rc geninfo_all_blocks=1 00:23:02.389 --rc geninfo_unexecuted_blocks=1 00:23:02.389 00:23:02.389 ' 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:02.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.389 --rc genhtml_branch_coverage=1 00:23:02.389 
--rc genhtml_function_coverage=1 00:23:02.389 --rc genhtml_legend=1 00:23:02.389 --rc geninfo_all_blocks=1 00:23:02.389 --rc geninfo_unexecuted_blocks=1 00:23:02.389 00:23:02.389 ' 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:02.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.389 --rc genhtml_branch_coverage=1 00:23:02.389 --rc genhtml_function_coverage=1 00:23:02.389 --rc genhtml_legend=1 00:23:02.389 --rc geninfo_all_blocks=1 00:23:02.389 --rc geninfo_unexecuted_blocks=1 00:23:02.389 00:23:02.389 ' 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:02.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.389 --rc genhtml_branch_coverage=1 00:23:02.389 --rc genhtml_function_coverage=1 00:23:02.389 --rc genhtml_legend=1 00:23:02.389 --rc geninfo_all_blocks=1 00:23:02.389 --rc geninfo_unexecuted_blocks=1 00:23:02.389 00:23:02.389 ' 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:23:02.389 14:28:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=79098 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 79098 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 79098 ']' 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.389 14:28:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:02.389 [2024-12-10 14:28:27.124781] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
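The 'lt 1.15 2' exchange traced above is scripts/common.sh deciding whether the installed lcov predates 2.x, which determines the branch/function-coverage flags that land in LCOV_OPTS and LCOV. A condensed, runnable sketch of that comparison idea -- split both version strings on '.', '-' and ':', then compare field by field -- assuming plain bash; this is an illustrative reduction, not the verbatim common.sh source:

    # Succeeds (returns 0) when version $1 sorts strictly below version $2.
    lt() {
        local IFS=.-:                  # field separators used by the traced read -ra calls
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0    # non-numeric fields compare as 0
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                       # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov older than 2.x: use the --rc lcov_branch_coverage=1 style flags"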
00:23:02.389 [2024-12-10 14:28:27.125102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79098 ] 00:23:02.649 [2024-12-10 14:28:27.305291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.649 [2024-12-10 14:28:27.432468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.217 14:28:27 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.217 14:28:27 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:23:03.217 14:28:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:03.217 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:23:03.217 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:03.217 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:23:03.217 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:23:03.217 14:28:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:03.476 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:03.476 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:23:03.476 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:03.476 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:03.476 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:03.476 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:03.476 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:03.476 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:03.735 { 00:23:03.735 "name": "nvme0n1", 00:23:03.735 "aliases": [ 00:23:03.735 "0d90fe56-f461-4b27-8566-366357af1c98" 00:23:03.735 ], 00:23:03.735 "product_name": "NVMe disk", 00:23:03.735 "block_size": 4096, 00:23:03.735 "num_blocks": 1310720, 00:23:03.735 "uuid": "0d90fe56-f461-4b27-8566-366357af1c98", 00:23:03.735 "numa_id": -1, 00:23:03.735 "assigned_rate_limits": { 00:23:03.735 "rw_ios_per_sec": 0, 00:23:03.735 "rw_mbytes_per_sec": 0, 00:23:03.735 "r_mbytes_per_sec": 0, 00:23:03.735 "w_mbytes_per_sec": 0 00:23:03.735 }, 00:23:03.735 "claimed": true, 00:23:03.735 "claim_type": "read_many_write_one", 00:23:03.735 "zoned": false, 00:23:03.735 "supported_io_types": { 00:23:03.735 "read": true, 00:23:03.735 "write": true, 00:23:03.735 "unmap": true, 00:23:03.735 "flush": true, 00:23:03.735 "reset": true, 00:23:03.735 "nvme_admin": true, 00:23:03.735 "nvme_io": true, 00:23:03.735 "nvme_io_md": false, 00:23:03.735 "write_zeroes": true, 00:23:03.735 "zcopy": false, 00:23:03.735 "get_zone_info": false, 00:23:03.735 "zone_management": false, 00:23:03.735 "zone_append": false, 00:23:03.735 "compare": true, 00:23:03.735 "compare_and_write": false, 00:23:03.735 "abort": true, 00:23:03.735 "seek_hole": false, 00:23:03.735 "seek_data": false, 00:23:03.735 "copy": true, 00:23:03.735 "nvme_iov_md": false 00:23:03.735 }, 00:23:03.735 "driver_specific": { 00:23:03.735 
"nvme": [ 00:23:03.735 { 00:23:03.735 "pci_address": "0000:00:11.0", 00:23:03.735 "trid": { 00:23:03.735 "trtype": "PCIe", 00:23:03.735 "traddr": "0000:00:11.0" 00:23:03.735 }, 00:23:03.735 "ctrlr_data": { 00:23:03.735 "cntlid": 0, 00:23:03.735 "vendor_id": "0x1b36", 00:23:03.735 "model_number": "QEMU NVMe Ctrl", 00:23:03.735 "serial_number": "12341", 00:23:03.735 "firmware_revision": "8.0.0", 00:23:03.735 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:03.735 "oacs": { 00:23:03.735 "security": 0, 00:23:03.735 "format": 1, 00:23:03.735 "firmware": 0, 00:23:03.735 "ns_manage": 1 00:23:03.735 }, 00:23:03.735 "multi_ctrlr": false, 00:23:03.735 "ana_reporting": false 00:23:03.735 }, 00:23:03.735 "vs": { 00:23:03.735 "nvme_version": "1.4" 00:23:03.735 }, 00:23:03.735 "ns_data": { 00:23:03.735 "id": 1, 00:23:03.735 "can_share": false 00:23:03.735 } 00:23:03.735 } 00:23:03.735 ], 00:23:03.735 "mp_policy": "active_passive" 00:23:03.735 } 00:23:03.735 } 00:23:03.735 ]' 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:03.735 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:03.994 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e291842f-8226-4b1a-a386-229d4d06b95c 00:23:03.994 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:23:03.994 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e291842f-8226-4b1a-a386-229d4d06b95c 00:23:04.254 14:28:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:04.513 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f7ce9367-f511-4778-807f-120b32ac5469 00:23:04.513 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f7ce9367-f511-4778-807f-120b32ac5469 00:23:04.513 14:28:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:04.513 14:28:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:04.513 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:23:04.513 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:04.513 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:04.513 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:23:04.772 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:04.772 14:28:29 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:04.772 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:04.772 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:04.772 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:04.772 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:04.772 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:04.772 { 00:23:04.772 "name": "35e12d76-05a8-4f6e-bdf5-42ff4d91eb61", 00:23:04.772 "aliases": [ 00:23:04.772 "lvs/nvme0n1p0" 00:23:04.772 ], 00:23:04.772 "product_name": "Logical Volume", 00:23:04.772 "block_size": 4096, 00:23:04.772 "num_blocks": 26476544, 00:23:04.772 "uuid": "35e12d76-05a8-4f6e-bdf5-42ff4d91eb61", 00:23:04.772 "assigned_rate_limits": { 00:23:04.772 "rw_ios_per_sec": 0, 00:23:04.772 "rw_mbytes_per_sec": 0, 00:23:04.772 "r_mbytes_per_sec": 0, 00:23:04.772 "w_mbytes_per_sec": 0 00:23:04.772 }, 00:23:04.772 "claimed": false, 00:23:04.772 "zoned": false, 00:23:04.772 "supported_io_types": { 00:23:04.772 "read": true, 00:23:04.772 "write": true, 00:23:04.772 "unmap": true, 00:23:04.772 "flush": false, 00:23:04.772 "reset": true, 00:23:04.772 "nvme_admin": false, 00:23:04.772 "nvme_io": false, 00:23:04.772 "nvme_io_md": false, 00:23:04.772 "write_zeroes": true, 00:23:04.772 "zcopy": false, 00:23:04.772 "get_zone_info": false, 00:23:04.772 "zone_management": false, 00:23:04.772 "zone_append": false, 00:23:04.772 "compare": false, 00:23:04.772 "compare_and_write": false, 00:23:04.772 "abort": false, 00:23:04.772 "seek_hole": true, 00:23:04.772 "seek_data": true, 00:23:04.772 "copy": false, 00:23:04.772 "nvme_iov_md": false 00:23:04.772 }, 00:23:04.772 "driver_specific": { 00:23:04.772 "lvol": { 00:23:04.772 "lvol_store_uuid": "f7ce9367-f511-4778-807f-120b32ac5469", 00:23:04.772 "base_bdev": "nvme0n1", 00:23:04.772 "thin_provision": true, 00:23:04.772 "num_allocated_clusters": 0, 00:23:04.772 "snapshot": false, 00:23:04.772 "clone": false, 00:23:04.772 "esnap_clone": false 00:23:04.772 } 00:23:04.772 } 00:23:04.772 } 00:23:04.772 ]' 00:23:04.772 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:04.772 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:04.772 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:05.031 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:05.031 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:05.031 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:05.031 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:23:05.031 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:23:05.031 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:05.290 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:05.290 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:05.290 14:28:29 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:05.290 14:28:29 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:05.290 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:05.290 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:05.290 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:05.290 14:28:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:05.290 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:05.290 { 00:23:05.290 "name": "35e12d76-05a8-4f6e-bdf5-42ff4d91eb61", 00:23:05.290 "aliases": [ 00:23:05.290 "lvs/nvme0n1p0" 00:23:05.290 ], 00:23:05.290 "product_name": "Logical Volume", 00:23:05.290 "block_size": 4096, 00:23:05.290 "num_blocks": 26476544, 00:23:05.290 "uuid": "35e12d76-05a8-4f6e-bdf5-42ff4d91eb61", 00:23:05.290 "assigned_rate_limits": { 00:23:05.290 "rw_ios_per_sec": 0, 00:23:05.290 "rw_mbytes_per_sec": 0, 00:23:05.290 "r_mbytes_per_sec": 0, 00:23:05.290 "w_mbytes_per_sec": 0 00:23:05.290 }, 00:23:05.290 "claimed": false, 00:23:05.290 "zoned": false, 00:23:05.290 "supported_io_types": { 00:23:05.290 "read": true, 00:23:05.290 "write": true, 00:23:05.290 "unmap": true, 00:23:05.290 "flush": false, 00:23:05.290 "reset": true, 00:23:05.290 "nvme_admin": false, 00:23:05.290 "nvme_io": false, 00:23:05.290 "nvme_io_md": false, 00:23:05.290 "write_zeroes": true, 00:23:05.290 "zcopy": false, 00:23:05.290 "get_zone_info": false, 00:23:05.290 "zone_management": false, 00:23:05.290 "zone_append": false, 00:23:05.290 "compare": false, 00:23:05.290 "compare_and_write": false, 00:23:05.290 "abort": false, 00:23:05.290 "seek_hole": true, 00:23:05.290 "seek_data": true, 00:23:05.290 "copy": false, 00:23:05.290 "nvme_iov_md": false 00:23:05.290 }, 00:23:05.290 "driver_specific": { 00:23:05.290 "lvol": { 00:23:05.290 "lvol_store_uuid": "f7ce9367-f511-4778-807f-120b32ac5469", 00:23:05.290 "base_bdev": "nvme0n1", 00:23:05.290 "thin_provision": true, 00:23:05.290 "num_allocated_clusters": 0, 00:23:05.290 "snapshot": false, 00:23:05.290 "clone": false, 00:23:05.290 "esnap_clone": false 00:23:05.290 } 00:23:05.290 } 00:23:05.290 } 00:23:05.290 ]' 00:23:05.290 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:05.549 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:05.549 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:05.549 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:05.549 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:05.549 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:05.549 14:28:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:23:05.549 14:28:30 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:05.808 14:28:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:23:05.808 14:28:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:05.808 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:05.808 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:05.808 14:28:30 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:23:05.808 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:05.808 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 00:23:05.809 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:05.809 { 00:23:05.809 "name": "35e12d76-05a8-4f6e-bdf5-42ff4d91eb61", 00:23:05.809 "aliases": [ 00:23:05.809 "lvs/nvme0n1p0" 00:23:05.809 ], 00:23:05.809 "product_name": "Logical Volume", 00:23:05.809 "block_size": 4096, 00:23:05.809 "num_blocks": 26476544, 00:23:05.809 "uuid": "35e12d76-05a8-4f6e-bdf5-42ff4d91eb61", 00:23:05.809 "assigned_rate_limits": { 00:23:05.809 "rw_ios_per_sec": 0, 00:23:05.809 "rw_mbytes_per_sec": 0, 00:23:05.809 "r_mbytes_per_sec": 0, 00:23:05.809 "w_mbytes_per_sec": 0 00:23:05.809 }, 00:23:05.809 "claimed": false, 00:23:05.809 "zoned": false, 00:23:05.809 "supported_io_types": { 00:23:05.809 "read": true, 00:23:05.809 "write": true, 00:23:05.809 "unmap": true, 00:23:05.809 "flush": false, 00:23:05.809 "reset": true, 00:23:05.809 "nvme_admin": false, 00:23:05.809 "nvme_io": false, 00:23:05.809 "nvme_io_md": false, 00:23:05.809 "write_zeroes": true, 00:23:05.809 "zcopy": false, 00:23:05.809 "get_zone_info": false, 00:23:05.809 "zone_management": false, 00:23:05.809 "zone_append": false, 00:23:05.809 "compare": false, 00:23:05.809 "compare_and_write": false, 00:23:05.809 "abort": false, 00:23:05.809 "seek_hole": true, 00:23:05.809 "seek_data": true, 00:23:05.809 "copy": false, 00:23:05.809 "nvme_iov_md": false 00:23:05.809 }, 00:23:05.809 "driver_specific": { 00:23:05.809 "lvol": { 00:23:05.809 "lvol_store_uuid": "f7ce9367-f511-4778-807f-120b32ac5469", 00:23:05.809 "base_bdev": "nvme0n1", 00:23:05.809 "thin_provision": true, 00:23:05.809 "num_allocated_clusters": 0, 00:23:05.809 "snapshot": false, 00:23:05.809 "clone": false, 00:23:05.809 "esnap_clone": false 00:23:05.809 } 00:23:05.809 } 00:23:05.809 } 00:23:05.809 ]' 00:23:05.809 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:05.809 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:05.809 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:06.069 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:06.069 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:06.069 14:28:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:06.069 14:28:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:23:06.069 14:28:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 35e12d76-05a8-4f6e-bdf5-42ff4d91eb61 -c nvc0n1p0 --l2p_dram_limit 20 00:23:06.069 [2024-12-10 14:28:30.837725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.837783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:06.069 [2024-12-10 14:28:30.837800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:06.069 [2024-12-10 14:28:30.837814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.837893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.837909] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:06.069 [2024-12-10 14:28:30.837920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:06.069 [2024-12-10 14:28:30.837934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.837953] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:06.069 [2024-12-10 14:28:30.838930] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:06.069 [2024-12-10 14:28:30.838960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.838975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:06.069 [2024-12-10 14:28:30.838988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:23:06.069 [2024-12-10 14:28:30.839003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.839075] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6c69a39f-3dbe-492c-b63a-0c226c59b12e 00:23:06.069 [2024-12-10 14:28:30.841493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.841528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:06.069 [2024-12-10 14:28:30.841549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:06.069 [2024-12-10 14:28:30.841560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.855456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.855486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:06.069 [2024-12-10 14:28:30.855502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.861 ms 00:23:06.069 [2024-12-10 14:28:30.855517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.855622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.855636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:06.069 [2024-12-10 14:28:30.855656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:06.069 [2024-12-10 14:28:30.855666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.855748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.855760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:06.069 [2024-12-10 14:28:30.855773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:06.069 [2024-12-10 14:28:30.855784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.855813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:06.069 [2024-12-10 14:28:30.861703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.861739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:06.069 [2024-12-10 14:28:30.861751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.914 ms 00:23:06.069 [2024-12-10 14:28:30.861768] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.861804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.861818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:06.069 [2024-12-10 14:28:30.861829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:06.069 [2024-12-10 14:28:30.861842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.861874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:06.069 [2024-12-10 14:28:30.862016] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:06.069 [2024-12-10 14:28:30.862032] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:06.069 [2024-12-10 14:28:30.862049] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:06.069 [2024-12-10 14:28:30.862062] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:06.069 [2024-12-10 14:28:30.862079] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:06.069 [2024-12-10 14:28:30.862091] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:06.069 [2024-12-10 14:28:30.862104] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:06.069 [2024-12-10 14:28:30.862113] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:06.069 [2024-12-10 14:28:30.862126] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:06.069 [2024-12-10 14:28:30.862140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.862153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:06.069 [2024-12-10 14:28:30.862163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:23:06.069 [2024-12-10 14:28:30.862186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.862257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.069 [2024-12-10 14:28:30.862271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:06.069 [2024-12-10 14:28:30.862280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:06.069 [2024-12-10 14:28:30.862296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.069 [2024-12-10 14:28:30.862371] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:06.069 [2024-12-10 14:28:30.862390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:06.069 [2024-12-10 14:28:30.862401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:06.069 [2024-12-10 14:28:30.862415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.069 [2024-12-10 14:28:30.862425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:06.069 [2024-12-10 14:28:30.862437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:06.069 [2024-12-10 14:28:30.862446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:06.069 
[2024-12-10 14:28:30.862459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:06.069 [2024-12-10 14:28:30.862468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:06.069 [2024-12-10 14:28:30.862480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:06.069 [2024-12-10 14:28:30.862489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:06.070 [2024-12-10 14:28:30.862511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:06.070 [2024-12-10 14:28:30.862519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:06.070 [2024-12-10 14:28:30.862533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:06.070 [2024-12-10 14:28:30.862543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:06.070 [2024-12-10 14:28:30.862558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:06.070 [2024-12-10 14:28:30.862579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:06.070 [2024-12-10 14:28:30.862588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:06.070 [2024-12-10 14:28:30.862610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.070 [2024-12-10 14:28:30.862630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:06.070 [2024-12-10 14:28:30.862643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.070 [2024-12-10 14:28:30.862663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:06.070 [2024-12-10 14:28:30.862686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.070 [2024-12-10 14:28:30.862706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:06.070 [2024-12-10 14:28:30.862718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.070 [2024-12-10 14:28:30.862743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:06.070 [2024-12-10 14:28:30.862751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:06.070 [2024-12-10 14:28:30.862771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:06.070 [2024-12-10 14:28:30.862783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:06.070 [2024-12-10 14:28:30.862792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:06.070 [2024-12-10 14:28:30.862803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:06.070 [2024-12-10 14:28:30.862811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:23:06.070 [2024-12-10 14:28:30.862822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:06.070 [2024-12-10 14:28:30.862842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:06.070 [2024-12-10 14:28:30.862850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862861] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:06.070 [2024-12-10 14:28:30.862871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:06.070 [2024-12-10 14:28:30.862884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:06.070 [2024-12-10 14:28:30.862895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.070 [2024-12-10 14:28:30.862912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:06.070 [2024-12-10 14:28:30.862921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:06.070 [2024-12-10 14:28:30.862933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:06.070 [2024-12-10 14:28:30.862942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:06.070 [2024-12-10 14:28:30.862954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:06.070 [2024-12-10 14:28:30.862963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:06.070 [2024-12-10 14:28:30.862977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:06.070 [2024-12-10 14:28:30.862989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:06.070 [2024-12-10 14:28:30.863003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:06.070 [2024-12-10 14:28:30.863013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:06.070 [2024-12-10 14:28:30.863026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:06.070 [2024-12-10 14:28:30.863036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:06.070 [2024-12-10 14:28:30.863051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:06.070 [2024-12-10 14:28:30.863061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:06.070 [2024-12-10 14:28:30.863074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:06.070 [2024-12-10 14:28:30.863084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:06.070 [2024-12-10 14:28:30.863099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:06.070 [2024-12-10 14:28:30.863109] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:06.070 [2024-12-10 14:28:30.863121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:06.070 [2024-12-10 14:28:30.863130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:06.070 [2024-12-10 14:28:30.863143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:06.070 [2024-12-10 14:28:30.863153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:06.070 [2024-12-10 14:28:30.863166] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:06.070 [2024-12-10 14:28:30.863177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:06.070 [2024-12-10 14:28:30.863194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:06.070 [2024-12-10 14:28:30.863204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:06.070 [2024-12-10 14:28:30.863217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:06.070 [2024-12-10 14:28:30.863226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:06.070 [2024-12-10 14:28:30.863240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.070 [2024-12-10 14:28:30.863250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:06.070 [2024-12-10 14:28:30.863265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:23:06.070 [2024-12-10 14:28:30.863275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.070 [2024-12-10 14:28:30.863317] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
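The layout dump above is internally consistent with the sizes established earlier in the trace: the base device is the 26476544-block, 4096-byte-block thin lvol, the NV cache is the 5171 MiB split of nvc0n1, and the l2p region is simply the entry count times the 4-byte L2P address size. A quick shell sanity check of that arithmetic, with the numbers copied from the log:

    # Base device capacity: block count x 4 KiB block size, in MiB
    echo $(( 26476544 * 4096 / 1024 / 1024 ))        # 103424 -> "Base device capacity: 103424.00 MiB"

    # L2P table footprint: 20971520 entries x 4 B per address, in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))           # 80 -> "Region l2p ... blocks: 80.00 MiB"

    # User-addressable space mapped by those entries, in GiB
    echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 )) # 80

The --l2p_dram_limit 20 passed to bdev_ftl_create caps how much of that 80 MiB map may stay resident in DRAM at once, which is presumably why startup reports an l2p maximum resident size of 19 (of 20) MiB a little further down.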
00:23:06.070 [2024-12-10 14:28:30.863329] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:10.264 [2024-12-10 14:28:34.845643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.264 [2024-12-10 14:28:34.845742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:10.264 [2024-12-10 14:28:34.845764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3988.791 ms 00:23:10.264 [2024-12-10 14:28:34.845775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.264 [2024-12-10 14:28:34.892054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.264 [2024-12-10 14:28:34.892307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:10.265 [2024-12-10 14:28:34.892357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.018 ms 00:23:10.265 [2024-12-10 14:28:34.892370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.265 [2024-12-10 14:28:34.892512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.265 [2024-12-10 14:28:34.892526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:10.265 [2024-12-10 14:28:34.892546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:10.265 [2024-12-10 14:28:34.892557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.265 [2024-12-10 14:28:34.956653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.265 [2024-12-10 14:28:34.956707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:10.265 [2024-12-10 14:28:34.956726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.136 ms 00:23:10.265 [2024-12-10 14:28:34.956738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.265 [2024-12-10 14:28:34.956783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.265 [2024-12-10 14:28:34.956794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:10.265 [2024-12-10 14:28:34.956808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:10.265 [2024-12-10 14:28:34.956823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.265 [2024-12-10 14:28:34.957651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.265 [2024-12-10 14:28:34.957673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:10.265 [2024-12-10 14:28:34.957910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:23:10.265 [2024-12-10 14:28:34.957936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.265 [2024-12-10 14:28:34.958069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.265 [2024-12-10 14:28:34.958083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:10.265 [2024-12-10 14:28:34.958101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:10.265 [2024-12-10 14:28:34.958112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.265 [2024-12-10 14:28:34.980386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.265 [2024-12-10 14:28:34.980423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:10.265 [2024-12-10 
14:28:34.980440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.283 ms 00:23:10.265 [2024-12-10 14:28:34.980463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.265 [2024-12-10 14:28:34.994217] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:23:10.265 [2024-12-10 14:28:35.003637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.265 [2024-12-10 14:28:35.003798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:10.265 [2024-12-10 14:28:35.003838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.131 ms 00:23:10.265 [2024-12-10 14:28:35.003852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.523 [2024-12-10 14:28:35.106140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.523 [2024-12-10 14:28:35.106209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:10.523 [2024-12-10 14:28:35.106226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.423 ms 00:23:10.523 [2024-12-10 14:28:35.106241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.523 [2024-12-10 14:28:35.106440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.523 [2024-12-10 14:28:35.106462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:10.523 [2024-12-10 14:28:35.106474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:23:10.523 [2024-12-10 14:28:35.106492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.523 [2024-12-10 14:28:35.141494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.523 [2024-12-10 14:28:35.141722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:10.523 [2024-12-10 14:28:35.141746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.987 ms 00:23:10.523 [2024-12-10 14:28:35.141761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.523 [2024-12-10 14:28:35.175188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.523 [2024-12-10 14:28:35.175230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:10.523 [2024-12-10 14:28:35.175245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.443 ms 00:23:10.523 [2024-12-10 14:28:35.175258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.523 [2024-12-10 14:28:35.175992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.523 [2024-12-10 14:28:35.176019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:10.523 [2024-12-10 14:28:35.176031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:23:10.523 [2024-12-10 14:28:35.176043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.523 [2024-12-10 14:28:35.274522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.523 [2024-12-10 14:28:35.274572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:10.523 [2024-12-10 14:28:35.274603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.588 ms 00:23:10.523 [2024-12-10 14:28:35.274617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.523 [2024-12-10 
14:28:35.310808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.523 [2024-12-10 14:28:35.310849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:10.523 [2024-12-10 14:28:35.310867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.140 ms 00:23:10.523 [2024-12-10 14:28:35.310881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.523 [2024-12-10 14:28:35.344825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.523 [2024-12-10 14:28:35.344881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:10.523 [2024-12-10 14:28:35.344895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.961 ms 00:23:10.523 [2024-12-10 14:28:35.344907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.782 [2024-12-10 14:28:35.380472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.782 [2024-12-10 14:28:35.380519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:10.782 [2024-12-10 14:28:35.380532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.585 ms 00:23:10.782 [2024-12-10 14:28:35.380545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.782 [2024-12-10 14:28:35.380588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.782 [2024-12-10 14:28:35.380607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:10.782 [2024-12-10 14:28:35.380618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:10.782 [2024-12-10 14:28:35.380631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.782 [2024-12-10 14:28:35.380749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.782 [2024-12-10 14:28:35.380766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:10.782 [2024-12-10 14:28:35.380778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:10.782 [2024-12-10 14:28:35.380813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.782 [2024-12-10 14:28:35.382190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4551.335 ms, result 0 00:23:10.782 { 00:23:10.782 "name": "ftl0", 00:23:10.782 "uuid": "6c69a39f-3dbe-492c-b63a-0c226c59b12e" 00:23:10.782 } 00:23:10.782 14:28:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:23:10.782 14:28:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:23:10.782 14:28:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:23:10.782 14:28:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:23:11.041 [2024-12-10 14:28:35.697709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:11.041 I/O size of 69632 is greater than zero copy threshold (65536). 00:23:11.041 Zero copy mechanism will not be used. 00:23:11.041 Running I/O for 4 seconds... 
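For reference, the ftl0 bdev whose 4551.335 ms startup just completed was assembled from rpc.py calls scattered through the trace above. Condensed into one place -- device addresses and sizes exactly as in this run, while the inline UUID captures are a simplification of how the ftl/common.sh helpers actually parse them:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Data path: QEMU NVMe at 0000:00:11.0 becomes nvme0n1, then a
    # thin-provisioned 103424 MiB lvol is carved from a fresh lvstore
    # (any lvstore left over from a previous run is deleted first).
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

    # Cache path: second NVMe at 0000:00:10.0, split so that nvc0n1p0
    # is a 5171 MiB write-buffer partition.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # Bind data lvol and cache partition into the FTL bdev under test.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20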
00:23:12.917 1362.00 IOPS, 90.45 MiB/s [2024-12-10T14:28:39.132Z] 1356.00 IOPS, 90.05 MiB/s [2024-12-10T14:28:39.701Z] 1374.33 IOPS, 91.26 MiB/s [2024-12-10T14:28:39.960Z] 1392.75 IOPS, 92.49 MiB/s
00:23:15.126 Latency(us)
00:23:15.126 [2024-12-10T14:28:39.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.126 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:23:15.126 ftl0 : 4.00 1392.40 92.46 0.00 0.00 751.97 289.52 22634.92
00:23:15.126 [2024-12-10T14:28:39.960Z] ===================================================================================================================
00:23:15.126 [2024-12-10T14:28:39.960Z] Total : 1392.40 92.46 0.00 0.00 751.97 289.52 22634.92
00:23:15.126 {
00:23:15.126   "results": [
00:23:15.126     {
00:23:15.126       "job": "ftl0",
00:23:15.126       "core_mask": "0x1",
00:23:15.126       "workload": "randwrite",
00:23:15.126       "status": "finished",
00:23:15.126       "queue_depth": 1,
00:23:15.126       "io_size": 69632,
00:23:15.126       "runtime": 4.001727,
00:23:15.126       "iops": 1392.3988318043685,
00:23:15.126       "mibps": 92.46398492450885,
00:23:15.126       "io_failed": 0,
00:23:15.126       "io_timeout": 0,
00:23:15.126       "avg_latency_us": 751.971188126519,
00:23:15.126       "min_latency_us": 289.5164658634538,
00:23:15.126       "max_latency_us": 22634.923694779118
00:23:15.126     }
00:23:15.126   ],
00:23:15.126   "core_count": 1
00:23:15.126 }
00:23:15.126 [2024-12-10 14:28:39.701931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:15.126 14:28:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:23:15.126 [2024-12-10 14:28:39.819864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:23:15.126 Running I/O for 4 seconds...
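The MiB/s column in the QD=1 table above is just IOPS times the 69632-byte (68 KiB) I/O size, scaled to MiB; the "mibps" field in the JSON block is the same product at full precision. Checked with awk, since bash arithmetic is integer-only:

    awk 'BEGIN { printf "%.2f\n", 1392.40 * 69632 / 1048576 }'   # 92.46, matching the table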
00:23:16.998 11315.00 IOPS, 44.20 MiB/s [2024-12-10T14:28:43.209Z] 10963.50 IOPS, 42.83 MiB/s [2024-12-10T14:28:44.144Z] 10753.67 IOPS, 42.01 MiB/s [2024-12-10T14:28:44.144Z] 10967.75 IOPS, 42.84 MiB/s
00:23:19.310 Latency(us)
00:23:19.310 [2024-12-10T14:28:44.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:19.310 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:23:19.310 ftl0 : 4.01 10960.25 42.81 0.00 0.00 11656.91 241.81 34320.86
00:23:19.310 [2024-12-10T14:28:44.144Z] ===================================================================================================================
00:23:19.310 [2024-12-10T14:28:44.144Z] Total : 10960.25 42.81 0.00 0.00 11656.91 0.00 34320.86
00:23:19.310 [2024-12-10 14:28:43.837407] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:19.310 {
00:23:19.310   "results": [
00:23:19.310     {
00:23:19.310       "job": "ftl0",
00:23:19.310       "core_mask": "0x1",
00:23:19.310       "workload": "randwrite",
00:23:19.310       "status": "finished",
00:23:19.310       "queue_depth": 128,
00:23:19.310       "io_size": 4096,
00:23:19.310       "runtime": 4.014324,
00:23:19.310       "iops": 10960.251339951634,
00:23:19.310       "mibps": 42.81348179668607,
00:23:19.310       "io_failed": 0,
00:23:19.310       "io_timeout": 0,
00:23:19.310       "avg_latency_us": 11656.905704549185,
00:23:19.310       "min_latency_us": 241.81204819277107,
00:23:19.310       "max_latency_us": 34320.86104417671
00:23:19.310     }
00:23:19.310   ],
00:23:19.310   "core_count": 1
00:23:19.310 }
00:23:19.310 14:28:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:23:19.310 [2024-12-10 14:28:43.962109] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:23:19.310 Running I/O for 4 seconds...
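At this point all three timed runs against the long-lived bdevperf process have been issued; they differ only in queue depth, workload and I/O size. Side by side (commands as traced; the trailing notes are one reading of the intent, not labels from the suite):

    bp=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    $bp perform_tests -q 1   -w randwrite -t 4 -o 69632   # QD1, 68 KiB writes; too big for zero-copy (65536 threshold)
    $bp perform_tests -q 128 -w randwrite -t 4 -o 4096    # QD128, 4 KiB random writes: throughput under load
    $bp perform_tests -q 128 -w verify    -t 4 -o 4096    # QD128 verify: writes, then reads back and checks the data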
00:23:21.178 8918.00 IOPS, 34.84 MiB/s [2024-12-10T14:28:47.387Z] 8967.50 IOPS, 35.03 MiB/s [2024-12-10T14:28:48.326Z] 9002.33 IOPS, 35.17 MiB/s [2024-12-10T14:28:48.326Z] 9023.75 IOPS, 35.25 MiB/s
00:23:23.492 Latency(us)
00:23:23.492 [2024-12-10T14:28:48.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.492 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:23.492 Verification LBA range: start 0x0 length 0x1400000
00:23:23.492 ftl0 : 4.01 9034.18 35.29 0.00 0.00 14127.25 253.33 23792.99
00:23:23.492 [2024-12-10T14:28:48.326Z] ===================================================================================================================
00:23:23.492 [2024-12-10T14:28:48.326Z] Total : 9034.18 35.29 0.00 0.00 14127.25 0.00 23792.99
00:23:23.492 [2024-12-10 14:28:47.985609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:23.492 {
00:23:23.492   "results": [
00:23:23.492     {
00:23:23.492       "job": "ftl0",
00:23:23.492       "core_mask": "0x1",
00:23:23.492       "workload": "verify",
00:23:23.492       "status": "finished",
00:23:23.492       "verify_range": {
00:23:23.492         "start": 0,
00:23:23.492         "length": 20971520
00:23:23.492       },
00:23:23.492       "queue_depth": 128,
00:23:23.492       "io_size": 4096,
00:23:23.492       "runtime": 4.009549,
00:23:23.492       "iops": 9034.183146284033,
00:23:23.492       "mibps": 35.289777915172,
00:23:23.492       "io_failed": 0,
00:23:23.492       "io_timeout": 0,
00:23:23.492       "avg_latency_us": 14127.246698590736,
00:23:23.492       "min_latency_us": 253.3269076305221,
00:23:23.492       "max_latency_us": 23792.98955823293
00:23:23.492     }
00:23:23.492   ],
00:23:23.492   "core_count": 1
00:23:23.492 }
00:23:23.492 14:28:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:23:23.492 [2024-12-10 14:28:48.189735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.492 [2024-12-10 14:28:48.189784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:23.492 [2024-12-10 14:28:48.189799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:23:23.492 [2024-12-10 14:28:48.189812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.492 [2024-12-10 14:28:48.189835] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:23.492 [2024-12-10 14:28:48.194176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.492 [2024-12-10 14:28:48.194206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:23.492 [2024-12-10 14:28:48.194221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.327 ms
00:23:23.492 [2024-12-10 14:28:48.194231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.492 [2024-12-10 14:28:48.196342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.492 [2024-12-10 14:28:48.196383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:23.492 [2024-12-10 14:28:48.196404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms
00:23:23.492 [2024-12-10 14:28:48.196415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.411322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.813 [2024-12-10 14:28:48.411375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:23.813 [2024-12-10 14:28:48.411399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 215.229 ms
00:23:23.813 [2024-12-10 14:28:48.411411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.416329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.813 [2024-12-10 14:28:48.416362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:23:23.813 [2024-12-10 14:28:48.416378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.883 ms
00:23:23.813 [2024-12-10 14:28:48.416393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.451948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.813 [2024-12-10 14:28:48.451986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:23:23.813 [2024-12-10 14:28:48.452003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.541 ms
00:23:23.813 [2024-12-10 14:28:48.452030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.474741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.813 [2024-12-10 14:28:48.474784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:23:23.813 [2024-12-10 14:28:48.474802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.703 ms
00:23:23.813 [2024-12-10 14:28:48.474813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.474967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.813 [2024-12-10 14:28:48.474982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:23:23.813 [2024-12-10 14:28:48.475000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms
00:23:23.813 [2024-12-10 14:28:48.475011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.509699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.813 [2024-12-10 14:28:48.509733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:23:23.813 [2024-12-10 14:28:48.509750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.721 ms
00:23:23.813 [2024-12-10 14:28:48.509759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.543504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.813 [2024-12-10 14:28:48.543538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:23:23.813 [2024-12-10 14:28:48.543554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.760 ms
00:23:23.813 [2024-12-10 14:28:48.543562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.577456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.813 [2024-12-10 14:28:48.577492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:23:23.813 [2024-12-10 14:28:48.577520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.903 ms
00:23:23.813 [2024-12-10 14:28:48.577529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.613627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.813 [2024-12-10 14:28:48.613785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:23.813 [2024-12-10 14:28:48.613817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.065 ms
00:23:23.813 [2024-12-10 14:28:48.613828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.813 [2024-12-10 14:28:48.613867] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:23.813 [2024-12-10 14:28:48.613886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.613902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.613913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.613928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.613940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.613954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.613965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.613979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.613990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:23:23.813 [2024-12-10 14:28:48.614154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state:
free 00:23:23.813 [2024-12-10 14:28:48.614165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-12-10 14:28:48.614436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.614988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615145] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-12-10 14:28:48.615202] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:23.814 [2024-12-10 14:28:48.615216] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c69a39f-3dbe-492c-b63a-0c226c59b12e 00:23:23.814 [2024-12-10 14:28:48.615231] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:23.814 [2024-12-10 14:28:48.615245] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:23.814 [2024-12-10 14:28:48.615255] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:23.814 [2024-12-10 14:28:48.615269] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:23.814 [2024-12-10 14:28:48.615280] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:23.814 [2024-12-10 14:28:48.615294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:23.814 [2024-12-10 14:28:48.615304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:23.814 [2024-12-10 14:28:48.615320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:23.814 [2024-12-10 14:28:48.615329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:23.814 [2024-12-10 14:28:48.615343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.814 [2024-12-10 14:28:48.615354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:23.814 [2024-12-10 14:28:48.615369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:23:23.814 [2024-12-10 14:28:48.615379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.074 [2024-12-10 14:28:48.636173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.074 [2024-12-10 14:28:48.636337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:24.074 [2024-12-10 14:28:48.636364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.754 ms 00:23:24.074 [2024-12-10 14:28:48.636375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.074 [2024-12-10 14:28:48.637011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.074 [2024-12-10 14:28:48.637030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:24.074 [2024-12-10 14:28:48.637045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:23:24.074 [2024-12-10 14:28:48.637056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.074 [2024-12-10 14:28:48.693455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.074 [2024-12-10 14:28:48.693492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.074 [2024-12-10 14:28:48.693512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.074 [2024-12-10 14:28:48.693523] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:24.074 [2024-12-10 14:28:48.693586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.074 [2024-12-10 14:28:48.693597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.074 [2024-12-10 14:28:48.693611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.074 [2024-12-10 14:28:48.693621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.074 [2024-12-10 14:28:48.693725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.074 [2024-12-10 14:28:48.693739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.074 [2024-12-10 14:28:48.693752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.074 [2024-12-10 14:28:48.693762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.074 [2024-12-10 14:28:48.693783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.074 [2024-12-10 14:28:48.693794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.074 [2024-12-10 14:28:48.693808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.074 [2024-12-10 14:28:48.693818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.074 [2024-12-10 14:28:48.818483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.074 [2024-12-10 14:28:48.818539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.074 [2024-12-10 14:28:48.818562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.074 [2024-12-10 14:28:48.818572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.334 [2024-12-10 14:28:48.919091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.334 [2024-12-10 14:28:48.919147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.334 [2024-12-10 14:28:48.919166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.334 [2024-12-10 14:28:48.919177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.334 [2024-12-10 14:28:48.919317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.334 [2024-12-10 14:28:48.919330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.334 [2024-12-10 14:28:48.919344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.334 [2024-12-10 14:28:48.919354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.334 [2024-12-10 14:28:48.919413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.334 [2024-12-10 14:28:48.919426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.334 [2024-12-10 14:28:48.919441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.334 [2024-12-10 14:28:48.919451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.334 [2024-12-10 14:28:48.919582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.334 [2024-12-10 14:28:48.919599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.334 [2024-12-10 14:28:48.919616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:24.334 [2024-12-10 14:28:48.919627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.334 [2024-12-10 14:28:48.919689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.334 [2024-12-10 14:28:48.919703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:24.334 [2024-12-10 14:28:48.919718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.334 [2024-12-10 14:28:48.919728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.334 [2024-12-10 14:28:48.919777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.334 [2024-12-10 14:28:48.919793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.334 [2024-12-10 14:28:48.919806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.334 [2024-12-10 14:28:48.919827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.334 [2024-12-10 14:28:48.919898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.334 [2024-12-10 14:28:48.919910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.334 [2024-12-10 14:28:48.919924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.334 [2024-12-10 14:28:48.919934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.334 [2024-12-10 14:28:48.920095] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 731.485 ms, result 0 00:23:24.334 true 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 79098 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 79098 ']' 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 79098 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79098 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79098' 00:23:24.334 killing process with pid 79098 00:23:24.334 Received shutdown signal, test time was about 4.000000 seconds 00:23:24.334 00:23:24.334 Latency(us) 00:23:24.334 [2024-12-10T14:28:49.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.334 [2024-12-10T14:28:49.168Z] =================================================================================================================== 00:23:24.334 [2024-12-10T14:28:49.168Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 79098 00:23:24.334 14:28:48 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 79098 00:23:25.713 14:28:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:25.713 Remove shared memory files 00:23:25.713 14:28:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:23:25.713 14:28:50 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:25.713 14:28:50 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:25.713 14:28:50 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:25.713 14:28:50 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:25.713 14:28:50 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:25.713 14:28:50 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:25.713 ************************************ 00:23:25.713 END TEST ftl_bdevperf 00:23:25.713 ************************************ 00:23:25.713 00:23:25.713 real 0m23.720s 00:23:25.713 user 0m26.082s 00:23:25.713 sys 0m1.304s 00:23:25.713 14:28:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.713 14:28:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:25.713 14:28:50 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:25.713 14:28:50 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:25.713 14:28:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.713 14:28:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:25.973 ************************************ 00:23:25.973 START TEST ftl_trim 00:23:25.973 ************************************ 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:25.973 * Looking for test storage... 00:23:25.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.973 14:28:50 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:25.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.973 --rc genhtml_branch_coverage=1 00:23:25.973 --rc genhtml_function_coverage=1 00:23:25.973 --rc genhtml_legend=1 00:23:25.973 --rc geninfo_all_blocks=1 00:23:25.973 --rc geninfo_unexecuted_blocks=1 00:23:25.973 00:23:25.973 ' 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:25.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.973 --rc genhtml_branch_coverage=1 00:23:25.973 --rc genhtml_function_coverage=1 00:23:25.973 --rc genhtml_legend=1 00:23:25.973 --rc geninfo_all_blocks=1 00:23:25.973 --rc geninfo_unexecuted_blocks=1 00:23:25.973 00:23:25.973 ' 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:25.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.973 --rc genhtml_branch_coverage=1 00:23:25.973 --rc genhtml_function_coverage=1 00:23:25.973 --rc genhtml_legend=1 00:23:25.973 --rc geninfo_all_blocks=1 00:23:25.973 --rc geninfo_unexecuted_blocks=1 00:23:25.973 00:23:25.973 ' 00:23:25.973 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:25.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.973 --rc genhtml_branch_coverage=1 00:23:25.973 --rc genhtml_function_coverage=1 00:23:25.973 --rc genhtml_legend=1 00:23:25.973 --rc geninfo_all_blocks=1 00:23:25.973 --rc geninfo_unexecuted_blocks=1 00:23:25.973 00:23:25.973 ' 00:23:25.973 14:28:50 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:25.973 14:28:50 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:25.973 14:28:50 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:25.973 14:28:50 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:25.973 14:28:50 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
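The xtrace above shows scripts/common.sh deciding that the installed lcov (1.15) is older than 2: lt calls cmp_versions with '<', which splits both versions on '.-:' into ver1/ver2 arrays and compares them component by component via decimal. A self-contained sketch of that comparison pattern, assuming plain dot-separated numeric versions (the real helper, as the trace shows, also tolerates '-' and ':' separators):

  lt() {
      # return 0 (true) iff $1 sorts numerically before $2, component-wise
      local -a a b; local i
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller component decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal is not less-than
  }
  lt 1.15 2 && echo "old lcov"   # true here, so the pre-2.0 LCOV_OPTS get exported

This matches the trace: with ver1=(1 15) and ver2=(2), the very first component comparison (1 < 2) settles it.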
00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:26.233 14:28:50 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79456 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:26.233 14:28:50 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79456 00:23:26.233 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79456 ']' 00:23:26.233 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.233 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.233 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.233 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.233 14:28:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:26.233 [2024-12-10 14:28:50.936266] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:23:26.233 [2024-12-10 14:28:50.936594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79456 ] 00:23:26.492 [2024-12-10 14:28:51.124218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:26.492 [2024-12-10 14:28:51.262982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.492 [2024-12-10 14:28:51.263127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.492 [2024-12-10 14:28:51.263165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.429 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.429 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:27.429 14:28:52 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:27.429 14:28:52 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:27.429 14:28:52 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:27.429 14:28:52 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:27.429 14:28:52 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:27.429 14:28:52 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:27.998 14:28:52 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:27.998 14:28:52 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:27.998 14:28:52 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:27.998 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:27.998 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:27.998 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:27.998 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:27.998 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:27.998 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:27.998 { 00:23:27.998 "name": "nvme0n1", 00:23:27.998 "aliases": [ 
00:23:27.998 "4920494c-ce51-4ffa-8eee-66f91bb159d5" 00:23:27.998 ], 00:23:27.998 "product_name": "NVMe disk", 00:23:27.998 "block_size": 4096, 00:23:27.998 "num_blocks": 1310720, 00:23:27.998 "uuid": "4920494c-ce51-4ffa-8eee-66f91bb159d5", 00:23:27.998 "numa_id": -1, 00:23:27.998 "assigned_rate_limits": { 00:23:27.998 "rw_ios_per_sec": 0, 00:23:27.998 "rw_mbytes_per_sec": 0, 00:23:27.998 "r_mbytes_per_sec": 0, 00:23:27.998 "w_mbytes_per_sec": 0 00:23:27.998 }, 00:23:27.998 "claimed": true, 00:23:27.998 "claim_type": "read_many_write_one", 00:23:27.998 "zoned": false, 00:23:27.998 "supported_io_types": { 00:23:27.998 "read": true, 00:23:27.998 "write": true, 00:23:27.998 "unmap": true, 00:23:27.998 "flush": true, 00:23:27.998 "reset": true, 00:23:27.998 "nvme_admin": true, 00:23:27.998 "nvme_io": true, 00:23:27.998 "nvme_io_md": false, 00:23:27.998 "write_zeroes": true, 00:23:27.998 "zcopy": false, 00:23:27.998 "get_zone_info": false, 00:23:27.998 "zone_management": false, 00:23:27.998 "zone_append": false, 00:23:27.998 "compare": true, 00:23:27.998 "compare_and_write": false, 00:23:27.998 "abort": true, 00:23:27.998 "seek_hole": false, 00:23:27.998 "seek_data": false, 00:23:27.998 "copy": true, 00:23:27.998 "nvme_iov_md": false 00:23:27.998 }, 00:23:27.998 "driver_specific": { 00:23:27.998 "nvme": [ 00:23:27.998 { 00:23:27.998 "pci_address": "0000:00:11.0", 00:23:27.998 "trid": { 00:23:27.998 "trtype": "PCIe", 00:23:27.998 "traddr": "0000:00:11.0" 00:23:27.998 }, 00:23:27.998 "ctrlr_data": { 00:23:27.998 "cntlid": 0, 00:23:27.998 "vendor_id": "0x1b36", 00:23:27.998 "model_number": "QEMU NVMe Ctrl", 00:23:27.998 "serial_number": "12341", 00:23:27.998 "firmware_revision": "8.0.0", 00:23:27.998 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:27.998 "oacs": { 00:23:27.998 "security": 0, 00:23:27.998 "format": 1, 00:23:27.998 "firmware": 0, 00:23:27.998 "ns_manage": 1 00:23:27.998 }, 00:23:27.998 "multi_ctrlr": false, 00:23:27.998 "ana_reporting": false 00:23:27.998 }, 00:23:27.998 "vs": { 00:23:27.998 "nvme_version": "1.4" 00:23:27.998 }, 00:23:27.998 "ns_data": { 00:23:27.998 "id": 1, 00:23:27.998 "can_share": false 00:23:27.998 } 00:23:27.998 } 00:23:27.998 ], 00:23:27.998 "mp_policy": "active_passive" 00:23:27.998 } 00:23:27.998 } 00:23:27.998 ]' 00:23:27.998 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:28.257 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:28.258 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:28.258 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:28.258 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:28.258 14:28:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:23:28.258 14:28:52 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:23:28.258 14:28:52 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:28.258 14:28:52 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:23:28.258 14:28:52 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:28.258 14:28:52 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:28.516 14:28:53 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f7ce9367-f511-4778-807f-120b32ac5469 00:23:28.516 14:28:53 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:23:28.516 14:28:53 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f7ce9367-f511-4778-807f-120b32ac5469 00:23:28.516 14:28:53 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:28.774 14:28:53 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=50e26e82-a171-4d19-82ed-098b8e2f3c0c 00:23:28.774 14:28:53 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 50e26e82-a171-4d19-82ed-098b8e2f3c0c 00:23:29.033 14:28:53 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:29.033 14:28:53 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:29.033 14:28:53 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:23:29.033 14:28:53 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:29.033 14:28:53 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:29.033 14:28:53 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:23:29.033 14:28:53 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:29.033 14:28:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:29.033 14:28:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:29.033 14:28:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:29.033 14:28:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:29.033 14:28:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:29.293 14:28:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:29.293 { 00:23:29.293 "name": "c4a40ef9-032c-44a0-a71d-753ba9f31ce4", 00:23:29.293 "aliases": [ 00:23:29.293 "lvs/nvme0n1p0" 00:23:29.293 ], 00:23:29.293 "product_name": "Logical Volume", 00:23:29.293 "block_size": 4096, 00:23:29.293 "num_blocks": 26476544, 00:23:29.293 "uuid": "c4a40ef9-032c-44a0-a71d-753ba9f31ce4", 00:23:29.293 "assigned_rate_limits": { 00:23:29.293 "rw_ios_per_sec": 0, 00:23:29.293 "rw_mbytes_per_sec": 0, 00:23:29.293 "r_mbytes_per_sec": 0, 00:23:29.293 "w_mbytes_per_sec": 0 00:23:29.293 }, 00:23:29.293 "claimed": false, 00:23:29.293 "zoned": false, 00:23:29.293 "supported_io_types": { 00:23:29.293 "read": true, 00:23:29.293 "write": true, 00:23:29.293 "unmap": true, 00:23:29.293 "flush": false, 00:23:29.293 "reset": true, 00:23:29.293 "nvme_admin": false, 00:23:29.293 "nvme_io": false, 00:23:29.293 "nvme_io_md": false, 00:23:29.293 "write_zeroes": true, 00:23:29.293 "zcopy": false, 00:23:29.293 "get_zone_info": false, 00:23:29.293 "zone_management": false, 00:23:29.293 "zone_append": false, 00:23:29.293 "compare": false, 00:23:29.293 "compare_and_write": false, 00:23:29.293 "abort": false, 00:23:29.293 "seek_hole": true, 00:23:29.293 "seek_data": true, 00:23:29.293 "copy": false, 00:23:29.293 "nvme_iov_md": false 00:23:29.293 }, 00:23:29.293 "driver_specific": { 00:23:29.293 "lvol": { 00:23:29.293 "lvol_store_uuid": "50e26e82-a171-4d19-82ed-098b8e2f3c0c", 00:23:29.293 "base_bdev": "nvme0n1", 00:23:29.293 "thin_provision": true, 00:23:29.293 "num_allocated_clusters": 0, 00:23:29.293 "snapshot": false, 00:23:29.293 "clone": false, 00:23:29.293 "esnap_clone": false 00:23:29.293 } 00:23:29.293 } 00:23:29.293 } 00:23:29.293 ]' 00:23:29.293 14:28:53 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:29.293 14:28:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:29.293 14:28:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:29.293 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:29.293 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:29.293 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:29.293 14:28:54 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:23:29.293 14:28:54 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:23:29.293 14:28:54 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:29.552 14:28:54 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:29.552 14:28:54 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:29.552 14:28:54 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:29.552 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:29.552 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:29.552 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:29.552 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:29.552 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:29.811 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:29.811 { 00:23:29.811 "name": "c4a40ef9-032c-44a0-a71d-753ba9f31ce4", 00:23:29.811 "aliases": [ 00:23:29.812 "lvs/nvme0n1p0" 00:23:29.812 ], 00:23:29.812 "product_name": "Logical Volume", 00:23:29.812 "block_size": 4096, 00:23:29.812 "num_blocks": 26476544, 00:23:29.812 "uuid": "c4a40ef9-032c-44a0-a71d-753ba9f31ce4", 00:23:29.812 "assigned_rate_limits": { 00:23:29.812 "rw_ios_per_sec": 0, 00:23:29.812 "rw_mbytes_per_sec": 0, 00:23:29.812 "r_mbytes_per_sec": 0, 00:23:29.812 "w_mbytes_per_sec": 0 00:23:29.812 }, 00:23:29.812 "claimed": false, 00:23:29.812 "zoned": false, 00:23:29.812 "supported_io_types": { 00:23:29.812 "read": true, 00:23:29.812 "write": true, 00:23:29.812 "unmap": true, 00:23:29.812 "flush": false, 00:23:29.812 "reset": true, 00:23:29.812 "nvme_admin": false, 00:23:29.812 "nvme_io": false, 00:23:29.812 "nvme_io_md": false, 00:23:29.812 "write_zeroes": true, 00:23:29.812 "zcopy": false, 00:23:29.812 "get_zone_info": false, 00:23:29.812 "zone_management": false, 00:23:29.812 "zone_append": false, 00:23:29.812 "compare": false, 00:23:29.812 "compare_and_write": false, 00:23:29.812 "abort": false, 00:23:29.812 "seek_hole": true, 00:23:29.812 "seek_data": true, 00:23:29.812 "copy": false, 00:23:29.812 "nvme_iov_md": false 00:23:29.812 }, 00:23:29.812 "driver_specific": { 00:23:29.812 "lvol": { 00:23:29.812 "lvol_store_uuid": "50e26e82-a171-4d19-82ed-098b8e2f3c0c", 00:23:29.812 "base_bdev": "nvme0n1", 00:23:29.812 "thin_provision": true, 00:23:29.812 "num_allocated_clusters": 0, 00:23:29.812 "snapshot": false, 00:23:29.812 "clone": false, 00:23:29.812 "esnap_clone": false 00:23:29.812 } 00:23:29.812 } 00:23:29.812 } 00:23:29.812 ]' 00:23:29.812 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:29.812 14:28:54 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:23:29.812 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:29.812 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:29.812 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:29.812 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:29.812 14:28:54 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:23:29.812 14:28:54 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:30.071 14:28:54 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:23:30.071 14:28:54 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:23:30.071 14:28:54 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:30.071 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:30.071 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:30.071 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:30.071 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:30.071 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4a40ef9-032c-44a0-a71d-753ba9f31ce4 00:23:30.330 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:30.330 { 00:23:30.330 "name": "c4a40ef9-032c-44a0-a71d-753ba9f31ce4", 00:23:30.330 "aliases": [ 00:23:30.330 "lvs/nvme0n1p0" 00:23:30.330 ], 00:23:30.330 "product_name": "Logical Volume", 00:23:30.330 "block_size": 4096, 00:23:30.330 "num_blocks": 26476544, 00:23:30.330 "uuid": "c4a40ef9-032c-44a0-a71d-753ba9f31ce4", 00:23:30.330 "assigned_rate_limits": { 00:23:30.330 "rw_ios_per_sec": 0, 00:23:30.330 "rw_mbytes_per_sec": 0, 00:23:30.330 "r_mbytes_per_sec": 0, 00:23:30.330 "w_mbytes_per_sec": 0 00:23:30.330 }, 00:23:30.330 "claimed": false, 00:23:30.330 "zoned": false, 00:23:30.330 "supported_io_types": { 00:23:30.330 "read": true, 00:23:30.330 "write": true, 00:23:30.330 "unmap": true, 00:23:30.330 "flush": false, 00:23:30.330 "reset": true, 00:23:30.330 "nvme_admin": false, 00:23:30.330 "nvme_io": false, 00:23:30.330 "nvme_io_md": false, 00:23:30.330 "write_zeroes": true, 00:23:30.330 "zcopy": false, 00:23:30.330 "get_zone_info": false, 00:23:30.330 "zone_management": false, 00:23:30.330 "zone_append": false, 00:23:30.330 "compare": false, 00:23:30.330 "compare_and_write": false, 00:23:30.330 "abort": false, 00:23:30.330 "seek_hole": true, 00:23:30.330 "seek_data": true, 00:23:30.330 "copy": false, 00:23:30.330 "nvme_iov_md": false 00:23:30.330 }, 00:23:30.330 "driver_specific": { 00:23:30.330 "lvol": { 00:23:30.330 "lvol_store_uuid": "50e26e82-a171-4d19-82ed-098b8e2f3c0c", 00:23:30.330 "base_bdev": "nvme0n1", 00:23:30.330 "thin_provision": true, 00:23:30.330 "num_allocated_clusters": 0, 00:23:30.330 "snapshot": false, 00:23:30.330 "clone": false, 00:23:30.330 "esnap_clone": false 00:23:30.330 } 00:23:30.330 } 00:23:30.330 } 00:23:30.330 ]' 00:23:30.330 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:30.330 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:30.330 14:28:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:30.330 14:28:55 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:23:30.330 14:28:55 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:30.330 14:28:55 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:30.330 14:28:55 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:23:30.330 14:28:55 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c4a40ef9-032c-44a0-a71d-753ba9f31ce4 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:23:30.596 [2024-12-10 14:28:55.229437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.596 [2024-12-10 14:28:55.229500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:30.596 [2024-12-10 14:28:55.229524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:30.596 [2024-12-10 14:28:55.229535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.596 [2024-12-10 14:28:55.233163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.596 [2024-12-10 14:28:55.233205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:30.596 [2024-12-10 14:28:55.233221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.594 ms 00:23:30.597 [2024-12-10 14:28:55.233233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.597 [2024-12-10 14:28:55.233378] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:30.597 [2024-12-10 14:28:55.234441] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:30.597 [2024-12-10 14:28:55.234481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.597 [2024-12-10 14:28:55.234494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:30.597 [2024-12-10 14:28:55.234508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:23:30.597 [2024-12-10 14:28:55.234519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.597 [2024-12-10 14:28:55.234642] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d1fe60d2-6936-421b-b05c-2884051c2feb 00:23:30.597 [2024-12-10 14:28:55.237073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.597 [2024-12-10 14:28:55.237111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:30.597 [2024-12-10 14:28:55.237124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:30.597 [2024-12-10 14:28:55.237137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.597 [2024-12-10 14:28:55.251185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.597 [2024-12-10 14:28:55.251223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:30.597 [2024-12-10 14:28:55.251241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.972 ms 00:23:30.597 [2024-12-10 14:28:55.251255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.597 [2024-12-10 14:28:55.251461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.597 [2024-12-10 14:28:55.251480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:30.598 [2024-12-10 14:28:55.251492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.096 ms 00:23:30.598 [2024-12-10 14:28:55.251512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.598 [2024-12-10 14:28:55.251561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.598 [2024-12-10 14:28:55.251576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:30.598 [2024-12-10 14:28:55.251587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:30.598 [2024-12-10 14:28:55.251604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.598 [2024-12-10 14:28:55.251645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:30.598 [2024-12-10 14:28:55.257867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.598 [2024-12-10 14:28:55.258010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:30.598 [2024-12-10 14:28:55.258040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.235 ms 00:23:30.598 [2024-12-10 14:28:55.258051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.598 [2024-12-10 14:28:55.258135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.598 [2024-12-10 14:28:55.258165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:30.598 [2024-12-10 14:28:55.258181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:30.598 [2024-12-10 14:28:55.258192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.598 [2024-12-10 14:28:55.258232] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:30.598 [2024-12-10 14:28:55.258377] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:30.598 [2024-12-10 14:28:55.258399] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:30.598 [2024-12-10 14:28:55.258414] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:30.598 [2024-12-10 14:28:55.258431] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:30.598 [2024-12-10 14:28:55.258445] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:30.598 [2024-12-10 14:28:55.258459] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:30.598 [2024-12-10 14:28:55.258470] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:30.598 [2024-12-10 14:28:55.258485] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:30.598 [2024-12-10 14:28:55.258499] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:30.598 [2024-12-10 14:28:55.258513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.598 [2024-12-10 14:28:55.258524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:30.598 [2024-12-10 14:28:55.258538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:23:30.598 [2024-12-10 14:28:55.258549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.598 [2024-12-10 14:28:55.258647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
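The layout summary just printed is internally consistent: 23592960 L2P entries at 4 bytes each is exactly the 90.00 MiB l2p region reported in the dump that follows, and 23592960 user blocks of 4 KiB is 92160 MiB of exported capacity, i.e. the 103424 MiB base device minus metadata and the 10% overprovisioning requested at create time. Worked in shell, with all values copied from the log:

    entries=23592960; addr=4; blk=4096
    echo $((entries * addr / 1024 / 1024))   # 90    -> "Region l2p ... 90.00 MiB"
    echo $((entries * blk / 1024 / 1024))    # 92160 -> MiB exported by ftl0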
00:23:30.598 [2024-12-10 14:28:55.258658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:30.598 [2024-12-10 14:28:55.258687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:30.598 [2024-12-10 14:28:55.258698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.598 [2024-12-10 14:28:55.258827] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:30.598 [2024-12-10 14:28:55.258841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:30.598 [2024-12-10 14:28:55.258855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:30.598 [2024-12-10 14:28:55.258866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.598 [2024-12-10 14:28:55.258880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:30.598 [2024-12-10 14:28:55.258890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:30.598 [2024-12-10 14:28:55.258903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:30.598 [2024-12-10 14:28:55.258913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:30.598 [2024-12-10 14:28:55.258926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:30.598 [2024-12-10 14:28:55.258935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:30.598 [2024-12-10 14:28:55.258948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:30.598 [2024-12-10 14:28:55.258957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:30.598 [2024-12-10 14:28:55.258972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:30.598 [2024-12-10 14:28:55.258983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:30.598 [2024-12-10 14:28:55.258996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:30.598 [2024-12-10 14:28:55.259005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.598 [2024-12-10 14:28:55.259022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:30.598 [2024-12-10 14:28:55.259032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:30.598 [2024-12-10 14:28:55.259044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.599 [2024-12-10 14:28:55.259055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:30.599 [2024-12-10 14:28:55.259068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:30.599 [2024-12-10 14:28:55.259078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.599 [2024-12-10 14:28:55.259090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:30.599 [2024-12-10 14:28:55.259100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:30.599 [2024-12-10 14:28:55.259112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.599 [2024-12-10 14:28:55.259122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:30.599 [2024-12-10 14:28:55.259134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:30.599 [2024-12-10 14:28:55.259144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.599 [2024-12-10 14:28:55.259157] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:23:30.599 [2024-12-10 14:28:55.259166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:30.599 [2024-12-10 14:28:55.259179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.599 [2024-12-10 14:28:55.259188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:30.599 [2024-12-10 14:28:55.259204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:30.599 [2024-12-10 14:28:55.259213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:30.599 [2024-12-10 14:28:55.259225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:30.599 [2024-12-10 14:28:55.259234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:30.599 [2024-12-10 14:28:55.259246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:30.599 [2024-12-10 14:28:55.259256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:30.599 [2024-12-10 14:28:55.259270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:30.599 [2024-12-10 14:28:55.259279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.599 [2024-12-10 14:28:55.259291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:30.599 [2024-12-10 14:28:55.259301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:30.599 [2024-12-10 14:28:55.259314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.599 [2024-12-10 14:28:55.259323] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:30.599 [2024-12-10 14:28:55.259336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:30.599 [2024-12-10 14:28:55.259348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:30.599 [2024-12-10 14:28:55.259361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.599 [2024-12-10 14:28:55.259372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:30.599 [2024-12-10 14:28:55.259389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:30.599 [2024-12-10 14:28:55.259399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:30.599 [2024-12-10 14:28:55.259412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:30.599 [2024-12-10 14:28:55.259421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:30.599 [2024-12-10 14:28:55.259434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:30.599 [2024-12-10 14:28:55.259446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:30.599 [2024-12-10 14:28:55.259462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:30.599 [2024-12-10 14:28:55.259477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:30.599 [2024-12-10 14:28:55.259491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:30.599 [2024-12-10 14:28:55.259501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:23:30.599 [2024-12-10 14:28:55.259515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:30.599 [2024-12-10 14:28:55.259526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:30.599 [2024-12-10 14:28:55.259539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:30.599 [2024-12-10 14:28:55.259550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:30.599 [2024-12-10 14:28:55.259563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:30.599 [2024-12-10 14:28:55.259574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:30.599 [2024-12-10 14:28:55.259592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:30.599 [2024-12-10 14:28:55.259603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:30.599 [2024-12-10 14:28:55.259616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:30.599 [2024-12-10 14:28:55.259626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:30.599 [2024-12-10 14:28:55.259640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:30.599 [2024-12-10 14:28:55.259650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:30.599 [2024-12-10 14:28:55.259681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:30.599 [2024-12-10 14:28:55.259693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:30.599 [2024-12-10 14:28:55.259706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:30.599 [2024-12-10 14:28:55.259717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:30.599 [2024-12-10 14:28:55.259731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:30.599 [2024-12-10 14:28:55.259742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.599 [2024-12-10 14:28:55.259756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:30.599 [2024-12-10 14:28:55.259768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:23:30.599 [2024-12-10 14:28:55.259781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.599 [2024-12-10 14:28:55.259877] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:23:30.599 [2024-12-10 14:28:55.259898] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:34.794 [2024-12-10 14:28:58.994902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.794 [2024-12-10 14:28:58.994958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:34.794 [2024-12-10 14:28:58.994975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3741.088 ms 00:23:34.794 [2024-12-10 14:28:58.994988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.794 [2024-12-10 14:28:59.031880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.794 [2024-12-10 14:28:59.031932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.794 [2024-12-10 14:28:59.031948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.619 ms 00:23:34.794 [2024-12-10 14:28:59.031962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.794 [2024-12-10 14:28:59.032106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.794 [2024-12-10 14:28:59.032123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:34.794 [2024-12-10 14:28:59.032153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:34.794 [2024-12-10 14:28:59.032169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.794 [2024-12-10 14:28:59.090804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.794 [2024-12-10 14:28:59.090853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.794 [2024-12-10 14:28:59.090868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.666 ms 00:23:34.794 [2024-12-10 14:28:59.090882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.794 [2024-12-10 14:28:59.091007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.794 [2024-12-10 14:28:59.091023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.795 [2024-12-10 14:28:59.091035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:34.795 [2024-12-10 14:28:59.091059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.091512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.091532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.795 [2024-12-10 14:28:59.091542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:23:34.795 [2024-12-10 14:28:59.091555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.091701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.091718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.795 [2024-12-10 14:28:59.091761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:34.795 [2024-12-10 14:28:59.091779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.112483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.112524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:23:34.795 [2024-12-10 14:28:59.112538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.678 ms 00:23:34.795 [2024-12-10 14:28:59.112551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.124303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:34.795 [2024-12-10 14:28:59.140774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.140815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:34.795 [2024-12-10 14:28:59.140833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.082 ms 00:23:34.795 [2024-12-10 14:28:59.140843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.241695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.241736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:34.795 [2024-12-10 14:28:59.241753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.894 ms 00:23:34.795 [2024-12-10 14:28:59.241764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.241992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.242007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:34.795 [2024-12-10 14:28:59.242023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:23:34.795 [2024-12-10 14:28:59.242033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.276511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.276785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:34.795 [2024-12-10 14:28:59.276814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.475 ms 00:23:34.795 [2024-12-10 14:28:59.276826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.310379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.310415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:34.795 [2024-12-10 14:28:59.310432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.488 ms 00:23:34.795 [2024-12-10 14:28:59.310443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.311280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.311307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:34.795 [2024-12-10 14:28:59.311322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:23:34.795 [2024-12-10 14:28:59.311333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.409613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.409652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:34.795 [2024-12-10 14:28:59.409678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.384 ms 00:23:34.795 [2024-12-10 14:28:59.409689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.444690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.444876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:34.795 [2024-12-10 14:28:59.444915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.843 ms 00:23:34.795 [2024-12-10 14:28:59.444926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.479217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.479252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:34.795 [2024-12-10 14:28:59.479268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.235 ms 00:23:34.795 [2024-12-10 14:28:59.479277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.513676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.513726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:34.795 [2024-12-10 14:28:59.513742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.336 ms 00:23:34.795 [2024-12-10 14:28:59.513752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.513867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.513883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:34.795 [2024-12-10 14:28:59.513899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.795 [2024-12-10 14:28:59.513909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.514006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.795 [2024-12-10 14:28:59.514017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:34.795 [2024-12-10 14:28:59.514029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:34.795 [2024-12-10 14:28:59.514039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.795 [2024-12-10 14:28:59.515181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:34.795 [2024-12-10 14:28:59.519496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4292.265 ms, result 0 00:23:34.795 [2024-12-10 14:28:59.520597] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:34.795 { 00:23:34.795 "name": "ftl0", 00:23:34.795 "uuid": "d1fe60d2-6936-421b-b05c-2884051c2feb" 00:23:34.795 } 00:23:34.795 14:28:59 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:23:34.795 14:28:59 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:34.795 14:28:59 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:34.795 14:28:59 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:23:34.795 14:28:59 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:34.795 14:28:59 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:34.795 14:28:59 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:35.055 14:28:59 
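Startup took 4292 ms in total, roughly 3741 ms of which was scrubbing the 5171 MiB NV cache (5 chunks), a cost paid because the superblock was created fresh (layout setup mode 1). With ftl0 registered, trim.sh waits for it to become visible before proceeding; the waitforbdev trace above boils down to something like the following simplified sketch (the real helper in common/autotest_common.sh also retries in a loop):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}   # 2000 ms default, as traced
        $rpc bdev_wait_for_examine
        # -t makes the RPC itself wait until the bdev appears or the timeout hits
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }
    waitforbdev ftl0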
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:35.314 [ 00:23:35.314 { 00:23:35.314 "name": "ftl0", 00:23:35.314 "aliases": [ 00:23:35.314 "d1fe60d2-6936-421b-b05c-2884051c2feb" 00:23:35.314 ], 00:23:35.314 "product_name": "FTL disk", 00:23:35.314 "block_size": 4096, 00:23:35.314 "num_blocks": 23592960, 00:23:35.314 "uuid": "d1fe60d2-6936-421b-b05c-2884051c2feb", 00:23:35.314 "assigned_rate_limits": { 00:23:35.314 "rw_ios_per_sec": 0, 00:23:35.314 "rw_mbytes_per_sec": 0, 00:23:35.314 "r_mbytes_per_sec": 0, 00:23:35.314 "w_mbytes_per_sec": 0 00:23:35.314 }, 00:23:35.314 "claimed": false, 00:23:35.314 "zoned": false, 00:23:35.314 "supported_io_types": { 00:23:35.314 "read": true, 00:23:35.314 "write": true, 00:23:35.314 "unmap": true, 00:23:35.314 "flush": true, 00:23:35.314 "reset": false, 00:23:35.314 "nvme_admin": false, 00:23:35.314 "nvme_io": false, 00:23:35.314 "nvme_io_md": false, 00:23:35.314 "write_zeroes": true, 00:23:35.314 "zcopy": false, 00:23:35.314 "get_zone_info": false, 00:23:35.314 "zone_management": false, 00:23:35.314 "zone_append": false, 00:23:35.314 "compare": false, 00:23:35.314 "compare_and_write": false, 00:23:35.314 "abort": false, 00:23:35.314 "seek_hole": false, 00:23:35.314 "seek_data": false, 00:23:35.314 "copy": false, 00:23:35.314 "nvme_iov_md": false 00:23:35.314 }, 00:23:35.314 "driver_specific": { 00:23:35.314 "ftl": { 00:23:35.314 "base_bdev": "c4a40ef9-032c-44a0-a71d-753ba9f31ce4", 00:23:35.314 "cache": "nvc0n1p0" 00:23:35.314 } 00:23:35.314 } 00:23:35.314 } 00:23:35.314 ] 00:23:35.314 14:28:59 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:23:35.314 14:28:59 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:23:35.314 14:28:59 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:35.573 14:29:00 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:23:35.573 14:29:00 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:23:35.573 14:29:00 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:23:35.573 { 00:23:35.573 "name": "ftl0", 00:23:35.573 "aliases": [ 00:23:35.573 "d1fe60d2-6936-421b-b05c-2884051c2feb" 00:23:35.573 ], 00:23:35.573 "product_name": "FTL disk", 00:23:35.573 "block_size": 4096, 00:23:35.573 "num_blocks": 23592960, 00:23:35.573 "uuid": "d1fe60d2-6936-421b-b05c-2884051c2feb", 00:23:35.573 "assigned_rate_limits": { 00:23:35.573 "rw_ios_per_sec": 0, 00:23:35.573 "rw_mbytes_per_sec": 0, 00:23:35.573 "r_mbytes_per_sec": 0, 00:23:35.573 "w_mbytes_per_sec": 0 00:23:35.573 }, 00:23:35.573 "claimed": false, 00:23:35.573 "zoned": false, 00:23:35.573 "supported_io_types": { 00:23:35.573 "read": true, 00:23:35.573 "write": true, 00:23:35.573 "unmap": true, 00:23:35.573 "flush": true, 00:23:35.573 "reset": false, 00:23:35.573 "nvme_admin": false, 00:23:35.573 "nvme_io": false, 00:23:35.573 "nvme_io_md": false, 00:23:35.573 "write_zeroes": true, 00:23:35.573 "zcopy": false, 00:23:35.573 "get_zone_info": false, 00:23:35.573 "zone_management": false, 00:23:35.573 "zone_append": false, 00:23:35.573 "compare": false, 00:23:35.573 "compare_and_write": false, 00:23:35.573 "abort": false, 00:23:35.573 "seek_hole": false, 00:23:35.573 "seek_data": false, 00:23:35.573 "copy": false, 00:23:35.573 "nvme_iov_md": false 00:23:35.573 }, 00:23:35.573 "driver_specific": { 00:23:35.573 "ftl": { 00:23:35.573 "base_bdev": 
"c4a40ef9-032c-44a0-a71d-753ba9f31ce4", 00:23:35.573 "cache": "nvc0n1p0" 00:23:35.573 } 00:23:35.573 } 00:23:35.573 } 00:23:35.573 ]' 00:23:35.573 14:29:00 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:23:35.833 14:29:00 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:23:35.833 14:29:00 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:35.833 [2024-12-10 14:29:00.601820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.833 [2024-12-10 14:29:00.602073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:35.833 [2024-12-10 14:29:00.602105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:35.833 [2024-12-10 14:29:00.602122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.833 [2024-12-10 14:29:00.602192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:35.833 [2024-12-10 14:29:00.606097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.833 [2024-12-10 14:29:00.606130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:35.833 [2024-12-10 14:29:00.606148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.887 ms 00:23:35.833 [2024-12-10 14:29:00.606160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.833 [2024-12-10 14:29:00.607146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.833 [2024-12-10 14:29:00.607173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:35.833 [2024-12-10 14:29:00.607187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:23:35.833 [2024-12-10 14:29:00.607198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.833 [2024-12-10 14:29:00.609837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.833 [2024-12-10 14:29:00.609864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:35.833 [2024-12-10 14:29:00.609878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:23:35.833 [2024-12-10 14:29:00.609888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.833 [2024-12-10 14:29:00.615271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.833 [2024-12-10 14:29:00.615302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:35.833 [2024-12-10 14:29:00.615316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.339 ms 00:23:35.833 [2024-12-10 14:29:00.615326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.833 [2024-12-10 14:29:00.649849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.833 [2024-12-10 14:29:00.649886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:35.833 [2024-12-10 14:29:00.649905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.445 ms 00:23:35.833 [2024-12-10 14:29:00.649915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.100 [2024-12-10 14:29:00.671942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.100 [2024-12-10 14:29:00.672160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:36.100 [2024-12-10 14:29:00.672189] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.957 ms 00:23:36.100 [2024-12-10 14:29:00.672204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.100 [2024-12-10 14:29:00.672519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.100 [2024-12-10 14:29:00.672535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:36.100 [2024-12-10 14:29:00.672549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:23:36.100 [2024-12-10 14:29:00.672560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.101 [2024-12-10 14:29:00.707075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.101 [2024-12-10 14:29:00.707240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:36.101 [2024-12-10 14:29:00.707278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.524 ms 00:23:36.101 [2024-12-10 14:29:00.707289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.101 [2024-12-10 14:29:00.742321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.101 [2024-12-10 14:29:00.742358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:36.101 [2024-12-10 14:29:00.742376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.921 ms 00:23:36.101 [2024-12-10 14:29:00.742385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.101 [2024-12-10 14:29:00.776008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.101 [2024-12-10 14:29:00.776053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:36.101 [2024-12-10 14:29:00.776069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.574 ms 00:23:36.101 [2024-12-10 14:29:00.776079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.101 [2024-12-10 14:29:00.810277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.101 [2024-12-10 14:29:00.810313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:36.101 [2024-12-10 14:29:00.810329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.053 ms 00:23:36.101 [2024-12-10 14:29:00.810338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.101 [2024-12-10 14:29:00.810440] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:36.101 [2024-12-10 14:29:00.810457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 
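bdev_ftl_unload drives the shutdown pipeline traced here: quiesce the IO channels, persist the L2P, NV cache, valid-map, P2L, band and trim metadata, then persist the superblock and set the FTL clean state so the next bring-up can load rather than rebuild. The band dump that follows lists all 100 bands as 0 / 261120 valid blocks in state 'free', and the statistics block reports WAF (write amplification factor) as total writes over user writes; since no user I/O has touched ftl0 yet, it reduces to:

    total_writes=960   # metadata writes issued during startup/shutdown
    user_writes=0      # nothing written through ftl0 so far
    # WAF = total_writes / user_writes -> division by zero, printed as "inf"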
[2024-12-10 14:29:00.810547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:23:36.101 [2024-12-10 14:29:00.810915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.810990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:36.101 [2024-12-10 14:29:00.811759] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:36.101 [2024-12-10 14:29:00.811775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1fe60d2-6936-421b-b05c-2884051c2feb 00:23:36.101 [2024-12-10 14:29:00.811785] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:36.101 [2024-12-10 14:29:00.811798] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:36.101 [2024-12-10 14:29:00.811808] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:36.101 [2024-12-10 14:29:00.811823] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:36.101 [2024-12-10 14:29:00.811833] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:36.101 [2024-12-10 14:29:00.811847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:23:36.101 [2024-12-10 14:29:00.811857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:36.101 [2024-12-10 14:29:00.811869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:36.101 [2024-12-10 14:29:00.811878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:36.101 [2024-12-10 14:29:00.811900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.101 [2024-12-10 14:29:00.811910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:36.101 [2024-12-10 14:29:00.811924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:23:36.101 [2024-12-10 14:29:00.811934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.101 [2024-12-10 14:29:00.830804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.102 [2024-12-10 14:29:00.830842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:36.102 [2024-12-10 14:29:00.830860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.844 ms 00:23:36.102 [2024-12-10 14:29:00.830871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.102 [2024-12-10 14:29:00.831516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.102 [2024-12-10 14:29:00.831535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:36.102 [2024-12-10 14:29:00.831549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:23:36.102 [2024-12-10 14:29:00.831558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.102 [2024-12-10 14:29:00.898218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.102 [2024-12-10 14:29:00.898253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.102 [2024-12-10 14:29:00.898268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.102 [2024-12-10 14:29:00.898278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.102 [2024-12-10 14:29:00.898398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.102 [2024-12-10 14:29:00.898410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.102 [2024-12-10 14:29:00.898423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.102 [2024-12-10 14:29:00.898432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.102 [2024-12-10 14:29:00.898514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.102 [2024-12-10 14:29:00.898527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.102 [2024-12-10 14:29:00.898545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.102 [2024-12-10 14:29:00.898554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.102 [2024-12-10 14:29:00.898608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.102 [2024-12-10 14:29:00.898619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.102 [2024-12-10 14:29:00.898631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.102 [2024-12-10 14:29:00.898641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-12-10 
14:29:01.023288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.361 [2024-12-10 14:29:01.023347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.361 [2024-12-10 14:29:01.023363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.361 [2024-12-10 14:29:01.023373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-12-10 14:29:01.118017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.361 [2024-12-10 14:29:01.118278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.361 [2024-12-10 14:29:01.118306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.361 [2024-12-10 14:29:01.118317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-12-10 14:29:01.118455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.361 [2024-12-10 14:29:01.118468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.361 [2024-12-10 14:29:01.118485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.361 [2024-12-10 14:29:01.118499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-12-10 14:29:01.118592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.361 [2024-12-10 14:29:01.118605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.361 [2024-12-10 14:29:01.118618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.361 [2024-12-10 14:29:01.118627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-12-10 14:29:01.118805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.361 [2024-12-10 14:29:01.118820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.361 [2024-12-10 14:29:01.118834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.361 [2024-12-10 14:29:01.118847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-12-10 14:29:01.118923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.361 [2024-12-10 14:29:01.118936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:36.361 [2024-12-10 14:29:01.118949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.361 [2024-12-10 14:29:01.118959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-12-10 14:29:01.119047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.361 [2024-12-10 14:29:01.119060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.361 [2024-12-10 14:29:01.119075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.361 [2024-12-10 14:29:01.119085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-12-10 14:29:01.119168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.361 [2024-12-10 14:29:01.119181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.361 [2024-12-10 14:29:01.119194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.361 [2024-12-10 14:29:01.119204] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-12-10 14:29:01.119465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 518.469 ms, result 0 00:23:36.361 true 00:23:36.361 14:29:01 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79456 00:23:36.361 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79456 ']' 00:23:36.361 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79456 00:23:36.361 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:36.361 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.361 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79456 00:23:36.620 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.620 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.620 killing process with pid 79456 00:23:36.620 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79456' 00:23:36.620 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79456 00:23:36.620 14:29:01 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79456 00:23:41.894 14:29:06 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:42.462 65536+0 records in 00:23:42.462 65536+0 records out 00:23:42.462 268435456 bytes (268 MB, 256 MiB) copied, 0.9309 s, 288 MB/s 00:23:42.462 14:29:07 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:42.721 [2024-12-10 14:29:07.337957] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
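With the first FTL instance torn down and the app process (pid 79456) stopped via the killprocess helper, the test generates a 256 MiB random pattern with dd and writes it through ftl0 using spdk_dd, which boots its own SPDK app instance from ftl.json. The dd numbers check out: 65536 blocks of 4 KiB is 268435456 bytes, and dividing by the 0.9309 s elapsed gives the reported 288 MB/s (decimal megabytes, as dd prints them):

    echo $((65536 * 4096))                                          # 268435456 B = 256 MiB
    awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 0.9309 / 1e6 }'  # ~288 MB/s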
00:23:42.721 [2024-12-10 14:29:07.338103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79661 ] 00:23:42.721 [2024-12-10 14:29:07.522060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.981 [2024-12-10 14:29:07.651335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.240 [2024-12-10 14:29:08.071207] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:43.240 [2024-12-10 14:29:08.071287] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:43.500 [2024-12-10 14:29:08.238186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.238238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:43.501 [2024-12-10 14:29:08.238255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:43.501 [2024-12-10 14:29:08.238281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.241746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.241783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:43.501 [2024-12-10 14:29:08.241796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms 00:23:43.501 [2024-12-10 14:29:08.241822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.241926] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:43.501 [2024-12-10 14:29:08.243003] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:43.501 [2024-12-10 14:29:08.243039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.243051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:43.501 [2024-12-10 14:29:08.243062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:23:43.501 [2024-12-10 14:29:08.243073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.245535] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:43.501 [2024-12-10 14:29:08.265410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.265467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:43.501 [2024-12-10 14:29:08.265483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.909 ms 00:23:43.501 [2024-12-10 14:29:08.265493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.265597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.265613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:43.501 [2024-12-10 14:29:08.265624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:43.501 [2024-12-10 14:29:08.265633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.277597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:43.501 [2024-12-10 14:29:08.277626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:43.501 [2024-12-10 14:29:08.277640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.921 ms 00:23:43.501 [2024-12-10 14:29:08.277667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.277799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.277816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:43.501 [2024-12-10 14:29:08.277828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:43.501 [2024-12-10 14:29:08.277839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.277872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.277884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:43.501 [2024-12-10 14:29:08.277895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:43.501 [2024-12-10 14:29:08.277905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.277929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:43.501 [2024-12-10 14:29:08.283691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.283722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:43.501 [2024-12-10 14:29:08.283735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.777 ms 00:23:43.501 [2024-12-10 14:29:08.283762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.283819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.283832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:43.501 [2024-12-10 14:29:08.283854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:43.501 [2024-12-10 14:29:08.283865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.283889] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:43.501 [2024-12-10 14:29:08.283915] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:43.501 [2024-12-10 14:29:08.283950] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:43.501 [2024-12-10 14:29:08.283969] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:43.501 [2024-12-10 14:29:08.284055] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:43.501 [2024-12-10 14:29:08.284068] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:43.501 [2024-12-10 14:29:08.284081] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:43.501 [2024-12-10 14:29:08.284097] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:43.501 [2024-12-10 14:29:08.284109] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:43.501 [2024-12-10 14:29:08.284120] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:43.501 [2024-12-10 14:29:08.284130] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:43.501 [2024-12-10 14:29:08.284140] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:43.501 [2024-12-10 14:29:08.284150] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:43.501 [2024-12-10 14:29:08.284160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.284170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:43.501 [2024-12-10 14:29:08.284180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:23:43.501 [2024-12-10 14:29:08.284190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.284262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.501 [2024-12-10 14:29:08.284277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:43.501 [2024-12-10 14:29:08.284287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:43.501 [2024-12-10 14:29:08.284297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.501 [2024-12-10 14:29:08.284385] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:43.501 [2024-12-10 14:29:08.284397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:43.501 [2024-12-10 14:29:08.284408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:43.501 [2024-12-10 14:29:08.284418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:43.501 [2024-12-10 14:29:08.284438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:43.501 [2024-12-10 14:29:08.284458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:43.501 [2024-12-10 14:29:08.284467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:43.501 [2024-12-10 14:29:08.284489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:43.501 [2024-12-10 14:29:08.284510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:43.501 [2024-12-10 14:29:08.284519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:43.501 [2024-12-10 14:29:08.284529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:43.501 [2024-12-10 14:29:08.284538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:43.501 [2024-12-10 14:29:08.284547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:43.501 [2024-12-10 14:29:08.284565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:43.501 [2024-12-10 14:29:08.284574] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:43.501 [2024-12-10 14:29:08.284592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.501 [2024-12-10 14:29:08.284610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:43.501 [2024-12-10 14:29:08.284619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.501 [2024-12-10 14:29:08.284637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:43.501 [2024-12-10 14:29:08.284646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.501 [2024-12-10 14:29:08.284663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:43.501 [2024-12-10 14:29:08.284671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.501 [2024-12-10 14:29:08.284688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:43.501 [2024-12-10 14:29:08.284713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:43.501 [2024-12-10 14:29:08.284732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:43.501 [2024-12-10 14:29:08.284741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:43.501 [2024-12-10 14:29:08.284749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:43.501 [2024-12-10 14:29:08.284758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:43.501 [2024-12-10 14:29:08.284766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:43.501 [2024-12-10 14:29:08.284775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.501 [2024-12-10 14:29:08.284784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:43.501 [2024-12-10 14:29:08.284793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:43.501 [2024-12-10 14:29:08.284804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.502 [2024-12-10 14:29:08.284812] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:43.502 [2024-12-10 14:29:08.284823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:43.502 [2024-12-10 14:29:08.284837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:43.502 [2024-12-10 14:29:08.284846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.502 [2024-12-10 14:29:08.284856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:43.502 [2024-12-10 14:29:08.284864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:43.502 [2024-12-10 14:29:08.284890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:43.502 
[2024-12-10 14:29:08.284900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:43.502 [2024-12-10 14:29:08.284909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:43.502 [2024-12-10 14:29:08.284918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:43.502 [2024-12-10 14:29:08.284930] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:43.502 [2024-12-10 14:29:08.284942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:43.502 [2024-12-10 14:29:08.284954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:43.502 [2024-12-10 14:29:08.284965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:43.502 [2024-12-10 14:29:08.284976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:43.502 [2024-12-10 14:29:08.284987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:43.502 [2024-12-10 14:29:08.284998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:43.502 [2024-12-10 14:29:08.285008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:43.502 [2024-12-10 14:29:08.285025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:43.502 [2024-12-10 14:29:08.285036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:43.502 [2024-12-10 14:29:08.285047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:43.502 [2024-12-10 14:29:08.285057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:43.502 [2024-12-10 14:29:08.285068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:43.502 [2024-12-10 14:29:08.285078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:43.502 [2024-12-10 14:29:08.285089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:43.502 [2024-12-10 14:29:08.285099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:43.502 [2024-12-10 14:29:08.285109] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:43.502 [2024-12-10 14:29:08.285120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:43.502 [2024-12-10 14:29:08.285132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:43.502 [2024-12-10 14:29:08.285141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:43.502 [2024-12-10 14:29:08.285151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:43.502 [2024-12-10 14:29:08.285162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:43.502 [2024-12-10 14:29:08.285173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.502 [2024-12-10 14:29:08.285190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:43.502 [2024-12-10 14:29:08.285201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:23:43.502 [2024-12-10 14:29:08.285211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.761 [2024-12-10 14:29:08.334809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.761 [2024-12-10 14:29:08.334847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:43.761 [2024-12-10 14:29:08.334861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.615 ms 00:23:43.761 [2024-12-10 14:29:08.334873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.761 [2024-12-10 14:29:08.335017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.761 [2024-12-10 14:29:08.335030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:43.761 [2024-12-10 14:29:08.335041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:43.761 [2024-12-10 14:29:08.335052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.761 [2024-12-10 14:29:08.414837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.761 [2024-12-10 14:29:08.414881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:43.761 [2024-12-10 14:29:08.414896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.890 ms 00:23:43.761 [2024-12-10 14:29:08.414907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.761 [2024-12-10 14:29:08.415005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.762 [2024-12-10 14:29:08.415019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:43.762 [2024-12-10 14:29:08.415031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:43.762 [2024-12-10 14:29:08.415041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.762 [2024-12-10 14:29:08.415827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.762 [2024-12-10 14:29:08.415843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:43.762 [2024-12-10 14:29:08.415862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:23:43.762 [2024-12-10 14:29:08.415872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.762 [2024-12-10 14:29:08.416000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.762 [2024-12-10 14:29:08.416014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:43.762 [2024-12-10 14:29:08.416025] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:43.762 [2024-12-10 14:29:08.416035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.762 [2024-12-10 14:29:08.440344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.762 [2024-12-10 14:29:08.440379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:43.762 [2024-12-10 14:29:08.440394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.323 ms 00:23:43.762 [2024-12-10 14:29:08.440405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.762 [2024-12-10 14:29:08.460631] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:43.762 [2024-12-10 14:29:08.460686] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:43.762 [2024-12-10 14:29:08.460720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.762 [2024-12-10 14:29:08.460733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:43.762 [2024-12-10 14:29:08.460746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.210 ms 00:23:43.762 [2024-12-10 14:29:08.460757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.762 [2024-12-10 14:29:08.490665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.762 [2024-12-10 14:29:08.490822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:43.762 [2024-12-10 14:29:08.490845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.861 ms 00:23:43.762 [2024-12-10 14:29:08.490858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.762 [2024-12-10 14:29:08.509502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.762 [2024-12-10 14:29:08.509542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:43.762 [2024-12-10 14:29:08.509556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.588 ms 00:23:43.762 [2024-12-10 14:29:08.509567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.762 [2024-12-10 14:29:08.527427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.762 [2024-12-10 14:29:08.527562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:43.762 [2024-12-10 14:29:08.527583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.805 ms 00:23:43.762 [2024-12-10 14:29:08.527610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.762 [2024-12-10 14:29:08.528425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.762 [2024-12-10 14:29:08.528451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:43.762 [2024-12-10 14:29:08.528464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:23:43.762 [2024-12-10 14:29:08.528475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.022 [2024-12-10 14:29:08.625122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.022 [2024-12-10 14:29:08.625326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:44.022 [2024-12-10 14:29:08.625356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 96.771 ms 00:23:44.022 [2024-12-10 14:29:08.625369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.022 [2024-12-10 14:29:08.636050] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:44.022 [2024-12-10 14:29:08.661681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.022 [2024-12-10 14:29:08.661734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:44.022 [2024-12-10 14:29:08.661752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.237 ms 00:23:44.022 [2024-12-10 14:29:08.661763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.022 [2024-12-10 14:29:08.661939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.022 [2024-12-10 14:29:08.661953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:44.022 [2024-12-10 14:29:08.661965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:44.022 [2024-12-10 14:29:08.661975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.022 [2024-12-10 14:29:08.662045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.022 [2024-12-10 14:29:08.662057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:44.022 [2024-12-10 14:29:08.662068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:44.022 [2024-12-10 14:29:08.662078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.022 [2024-12-10 14:29:08.662120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.022 [2024-12-10 14:29:08.662138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:44.022 [2024-12-10 14:29:08.662149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:44.022 [2024-12-10 14:29:08.662159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.022 [2024-12-10 14:29:08.662201] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:44.022 [2024-12-10 14:29:08.662214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.022 [2024-12-10 14:29:08.662224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:44.022 [2024-12-10 14:29:08.662234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:44.022 [2024-12-10 14:29:08.662244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.022 [2024-12-10 14:29:08.698501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.022 [2024-12-10 14:29:08.698694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:44.022 [2024-12-10 14:29:08.698718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.289 ms 00:23:44.022 [2024-12-10 14:29:08.698730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.022 [2024-12-10 14:29:08.698854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.022 [2024-12-10 14:29:08.698869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:44.022 [2024-12-10 14:29:08.698881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:44.022 [2024-12-10 14:29:08.698892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
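The layout dump earlier in this startup trace is internally consistent: the L2P region size follows directly from the reported entry count and address size. A quick cross-check (assuming FTL's 4 KiB block size, and reading superblock region type 0x2 as the L2P region, which its size suggests; both are assumptions, not stated in the log):

    # L2P entries x 4 B address size = 94371840 B = 90 MiB,
    # matching "Region l2p ... blocks: 90.00 MiB" in the dump above.
    echo $(( 23592960 * 4 / 1048576 ))
    # Superblock region type:0x2 spans 0x5a00 blocks; at an assumed
    # 4 KiB block size that is again 90 MiB, i.e. the same region.
    echo $(( 0x5a00 * 4096 / 1048576 ))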
00:23:44.022 [2024-12-10 14:29:08.700237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.022 [2024-12-10 14:29:08.704322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 462.436 ms, result 0 00:23:44.022 [2024-12-10 14:29:08.705295] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:44.022 [2024-12-10 14:29:08.723320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.959  [2024-12-10T14:29:10.730Z] Copying: 22/256 [MB] (22 MBps) [2024-12-10T14:29:12.107Z] Copying: 43/256 [MB] (20 MBps) [2024-12-10T14:29:13.045Z] Copying: 65/256 [MB] (22 MBps) [2024-12-10T14:29:14.027Z] Copying: 88/256 [MB] (22 MBps) [2024-12-10T14:29:14.964Z] Copying: 110/256 [MB] (22 MBps) [2024-12-10T14:29:15.907Z] Copying: 133/256 [MB] (22 MBps) [2024-12-10T14:29:16.844Z] Copying: 155/256 [MB] (22 MBps) [2024-12-10T14:29:17.781Z] Copying: 178/256 [MB] (22 MBps) [2024-12-10T14:29:18.718Z] Copying: 200/256 [MB] (22 MBps) [2024-12-10T14:29:20.097Z] Copying: 223/256 [MB] (22 MBps) [2024-12-10T14:29:20.097Z] Copying: 247/256 [MB] (23 MBps) [2024-12-10T14:29:20.097Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-10 14:29:20.075017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:55.263 [2024-12-10 14:29:20.090618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.263 [2024-12-10 14:29:20.090817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:55.263 [2024-12-10 14:29:20.090845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:55.263 [2024-12-10 14:29:20.090867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.263 [2024-12-10 14:29:20.090901] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:55.523 [2024-12-10 14:29:20.095579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.523 [2024-12-10 14:29:20.095612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:55.523 [2024-12-10 14:29:20.095642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.668 ms 00:23:55.523 [2024-12-10 14:29:20.095653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.097795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 [2024-12-10 14:29:20.097942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:55.524 [2024-12-10 14:29:20.097965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:23:55.524 [2024-12-10 14:29:20.097976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.104764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 [2024-12-10 14:29:20.104935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:55.524 [2024-12-10 14:29:20.104957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.771 ms 00:23:55.524 [2024-12-10 14:29:20.104968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.110493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 
[2024-12-10 14:29:20.110530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:55.524 [2024-12-10 14:29:20.110543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.477 ms 00:23:55.524 [2024-12-10 14:29:20.110570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.145302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 [2024-12-10 14:29:20.145337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:55.524 [2024-12-10 14:29:20.145351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.711 ms 00:23:55.524 [2024-12-10 14:29:20.145361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.165163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 [2024-12-10 14:29:20.165206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:55.524 [2024-12-10 14:29:20.165223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.763 ms 00:23:55.524 [2024-12-10 14:29:20.165233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.165395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 [2024-12-10 14:29:20.165412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:55.524 [2024-12-10 14:29:20.165423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:55.524 [2024-12-10 14:29:20.165444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.200814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 [2024-12-10 14:29:20.200848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:55.524 [2024-12-10 14:29:20.200861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.402 ms 00:23:55.524 [2024-12-10 14:29:20.200870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.234979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 [2024-12-10 14:29:20.235014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:55.524 [2024-12-10 14:29:20.235026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.112 ms 00:23:55.524 [2024-12-10 14:29:20.235036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.268572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 [2024-12-10 14:29:20.268719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:55.524 [2024-12-10 14:29:20.268756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.537 ms 00:23:55.524 [2024-12-10 14:29:20.268767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.303044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.524 [2024-12-10 14:29:20.303080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:55.524 [2024-12-10 14:29:20.303093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.249 ms 00:23:55.524 [2024-12-10 14:29:20.303103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.524 [2024-12-10 14:29:20.303158] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:55.524 [2024-12-10 14:29:20.303176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303439] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 
14:29:20.303728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:55.524 [2024-12-10 14:29:20.303834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.303998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:23:55.525 [2024-12-10 14:29:20.304030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:55.525 [2024-12-10 14:29:20.304322] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:55.525 [2024-12-10 14:29:20.304333] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1fe60d2-6936-421b-b05c-2884051c2feb 00:23:55.525 [2024-12-10 14:29:20.304344] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:55.525 [2024-12-10 14:29:20.304355] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:55.525 [2024-12-10 14:29:20.304365] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:55.525 [2024-12-10 14:29:20.304376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:55.525 [2024-12-10 14:29:20.304386] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:55.525 [2024-12-10 14:29:20.304397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:55.525 [2024-12-10 14:29:20.304407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:55.525 [2024-12-10 14:29:20.304417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:55.525 [2024-12-10 14:29:20.304426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:55.525 [2024-12-10 14:29:20.304436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.525 [2024-12-10 14:29:20.304452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:55.525 [2024-12-10 14:29:20.304463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:23:55.525 [2024-12-10 14:29:20.304473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.525 [2024-12-10 14:29:20.326008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.525 [2024-12-10 14:29:20.326042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:55.525 [2024-12-10 14:29:20.326056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.549 ms 00:23:55.525 [2024-12-10 14:29:20.326066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.525 [2024-12-10 14:29:20.326715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.525 [2024-12-10 14:29:20.326735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:55.525 [2024-12-10 14:29:20.326747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:23:55.525 [2024-12-10 14:29:20.326757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.785 [2024-12-10 14:29:20.385919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.785 [2024-12-10 14:29:20.385957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:55.785 [2024-12-10 14:29:20.385972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.785 [2024-12-10 14:29:20.385983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.785 [2024-12-10 14:29:20.386104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.785 [2024-12-10 14:29:20.386117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:55.785 [2024-12-10 14:29:20.386130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:55.785 [2024-12-10 14:29:20.386140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.785 [2024-12-10 14:29:20.386194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.785 [2024-12-10 14:29:20.386208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:55.785 [2024-12-10 14:29:20.386219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.785 [2024-12-10 14:29:20.386230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.785 [2024-12-10 14:29:20.386251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.785 [2024-12-10 14:29:20.386267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:55.785 [2024-12-10 14:29:20.386278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.785 [2024-12-10 14:29:20.386289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.785 [2024-12-10 14:29:20.521269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.785 [2024-12-10 14:29:20.521331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:55.785 [2024-12-10 14:29:20.521347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.785 [2024-12-10 14:29:20.521375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.045 [2024-12-10 14:29:20.622126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.045 [2024-12-10 14:29:20.622192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:56.045 [2024-12-10 14:29:20.622210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.045 [2024-12-10 14:29:20.622222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.045 [2024-12-10 14:29:20.622343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.045 [2024-12-10 14:29:20.622357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.045 [2024-12-10 14:29:20.622369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.045 [2024-12-10 14:29:20.622380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.045 [2024-12-10 14:29:20.622415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.045 [2024-12-10 14:29:20.622427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.045 [2024-12-10 14:29:20.622445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.045 [2024-12-10 14:29:20.622456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.045 [2024-12-10 14:29:20.622592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.045 [2024-12-10 14:29:20.622606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.045 [2024-12-10 14:29:20.622618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.045 [2024-12-10 14:29:20.622628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.045 [2024-12-10 14:29:20.622688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.045 [2024-12-10 14:29:20.622703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:56.045 
[2024-12-10 14:29:20.622715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.045 [2024-12-10 14:29:20.622731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.045 [2024-12-10 14:29:20.622781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.045 [2024-12-10 14:29:20.622793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.045 [2024-12-10 14:29:20.622804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.045 [2024-12-10 14:29:20.622814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.045 [2024-12-10 14:29:20.622870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.045 [2024-12-10 14:29:20.622883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.045 [2024-12-10 14:29:20.622898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.045 [2024-12-10 14:29:20.622909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.045 [2024-12-10 14:29:20.623102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.327 ms, result 0 00:23:57.425 00:23:57.425 00:23:57.425 14:29:21 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79815 00:23:57.425 14:29:21 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:57.425 14:29:21 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79815 00:23:57.425 14:29:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79815 ']' 00:23:57.425 14:29:21 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.425 14:29:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.425 14:29:21 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.425 14:29:21 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.425 14:29:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:57.425 [2024-12-10 14:29:22.063096] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
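The trace above shows trim.sh launching spdk_tgt in the background (svcpid=79815) and then waitforlisten blocking until the RPC socket /var/tmp/spdk.sock comes up. A minimal sketch of that start-and-wait pattern, assuming a simple socket poll (SPDK's actual waitforlisten in autotest_common.sh is more elaborate; the binary path, max_retries=100, and the socket path are taken from the trace):

    # Launch the target in the background and remember its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # Poll until the RPC UNIX socket appears, up to max_retries times.
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.5
    done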
00:23:57.425 [2024-12-10 14:29:22.063254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79815 ] 00:23:57.425 [2024-12-10 14:29:22.249418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.685 [2024-12-10 14:29:22.375623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.624 14:29:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.624 14:29:23 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:58.624 14:29:23 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:58.884 [2024-12-10 14:29:23.575794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.884 [2024-12-10 14:29:23.575864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.144 [2024-12-10 14:29:23.763660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.144 [2024-12-10 14:29:23.763725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:59.144 [2024-12-10 14:29:23.763747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:59.144 [2024-12-10 14:29:23.763774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.144 [2024-12-10 14:29:23.768097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.144 [2024-12-10 14:29:23.768135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:59.145 [2024-12-10 14:29:23.768167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.308 ms 00:23:59.145 [2024-12-10 14:29:23.768177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.768307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:59.145 [2024-12-10 14:29:23.769307] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:59.145 [2024-12-10 14:29:23.769347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.769359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:59.145 [2024-12-10 14:29:23.769372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:23:59.145 [2024-12-10 14:29:23.769384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.771884] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:59.145 [2024-12-10 14:29:23.791519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.791567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:59.145 [2024-12-10 14:29:23.791583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.671 ms 00:23:59.145 [2024-12-10 14:29:23.791615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.791737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.791758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:59.145 [2024-12-10 14:29:23.791771] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:59.145 [2024-12-10 14:29:23.791786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.804077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.804123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.145 [2024-12-10 14:29:23.804137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.250 ms 00:23:59.145 [2024-12-10 14:29:23.804169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.804338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.804359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.145 [2024-12-10 14:29:23.804371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:23:59.145 [2024-12-10 14:29:23.804395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.804426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.804443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:59.145 [2024-12-10 14:29:23.804454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:59.145 [2024-12-10 14:29:23.804470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.804497] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:59.145 [2024-12-10 14:29:23.810231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.810264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.145 [2024-12-10 14:29:23.810299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.744 ms 00:23:59.145 [2024-12-10 14:29:23.810310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.810379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.810393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:59.145 [2024-12-10 14:29:23.810410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:59.145 [2024-12-10 14:29:23.810426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.810455] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:59.145 [2024-12-10 14:29:23.810491] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:59.145 [2024-12-10 14:29:23.810545] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:59.145 [2024-12-10 14:29:23.810566] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:59.145 [2024-12-10 14:29:23.810690] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:59.145 [2024-12-10 14:29:23.810706] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:59.145 [2024-12-10 14:29:23.810731] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:59.145 [2024-12-10 14:29:23.810745] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:59.145 [2024-12-10 14:29:23.810764] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:59.145 [2024-12-10 14:29:23.810776] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:59.145 [2024-12-10 14:29:23.810791] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:59.145 [2024-12-10 14:29:23.810801] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:59.145 [2024-12-10 14:29:23.810823] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:59.145 [2024-12-10 14:29:23.810835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.810851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:59.145 [2024-12-10 14:29:23.810862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:23:59.145 [2024-12-10 14:29:23.810878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.810961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.145 [2024-12-10 14:29:23.810982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:59.145 [2024-12-10 14:29:23.810993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:59.145 [2024-12-10 14:29:23.811009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.145 [2024-12-10 14:29:23.811103] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:59.145 [2024-12-10 14:29:23.811122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:59.145 [2024-12-10 14:29:23.811134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.145 [2024-12-10 14:29:23.811151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:59.145 [2024-12-10 14:29:23.811179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:59.145 [2024-12-10 14:29:23.811209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:59.145 [2024-12-10 14:29:23.811219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.145 [2024-12-10 14:29:23.811245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:59.145 [2024-12-10 14:29:23.811260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:59.145 [2024-12-10 14:29:23.811270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.145 [2024-12-10 14:29:23.811286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:59.145 [2024-12-10 14:29:23.811297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:59.145 [2024-12-10 14:29:23.811313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.145 
[2024-12-10 14:29:23.811323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:59.145 [2024-12-10 14:29:23.811339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:59.145 [2024-12-10 14:29:23.811363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:59.145 [2024-12-10 14:29:23.811389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.145 [2024-12-10 14:29:23.811413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:59.145 [2024-12-10 14:29:23.811434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.145 [2024-12-10 14:29:23.811458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:59.145 [2024-12-10 14:29:23.811468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.145 [2024-12-10 14:29:23.811493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:59.145 [2024-12-10 14:29:23.811509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.145 [2024-12-10 14:29:23.811541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:59.145 [2024-12-10 14:29:23.811551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.145 [2024-12-10 14:29:23.811576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:59.145 [2024-12-10 14:29:23.811591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:59.145 [2024-12-10 14:29:23.811600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.145 [2024-12-10 14:29:23.811615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:59.145 [2024-12-10 14:29:23.811625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:59.145 [2024-12-10 14:29:23.811645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:59.145 [2024-12-10 14:29:23.811685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:59.145 [2024-12-10 14:29:23.811695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811711] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:59.145 [2024-12-10 14:29:23.811728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:59.145 [2024-12-10 14:29:23.811743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.145 [2024-12-10 14:29:23.811754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.145 [2024-12-10 14:29:23.811769] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:59.145 [2024-12-10 14:29:23.811780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:59.146 [2024-12-10 14:29:23.811795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:59.146 [2024-12-10 14:29:23.811805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:59.146 [2024-12-10 14:29:23.811820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:59.146 [2024-12-10 14:29:23.811830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:59.146 [2024-12-10 14:29:23.811846] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:59.146 [2024-12-10 14:29:23.811860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.146 [2024-12-10 14:29:23.811884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:59.146 [2024-12-10 14:29:23.811895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:59.146 [2024-12-10 14:29:23.811911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:59.146 [2024-12-10 14:29:23.811922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:59.146 [2024-12-10 14:29:23.811938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:59.146 [2024-12-10 14:29:23.811950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:59.146 [2024-12-10 14:29:23.811965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:59.146 [2024-12-10 14:29:23.811978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:59.146 [2024-12-10 14:29:23.811995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:59.146 [2024-12-10 14:29:23.812005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:59.146 [2024-12-10 14:29:23.812022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:59.146 [2024-12-10 14:29:23.812032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:59.146 [2024-12-10 14:29:23.812048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:59.146 [2024-12-10 14:29:23.812059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:59.146 [2024-12-10 14:29:23.812075] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:59.146 [2024-12-10 
14:29:23.812087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.146 [2024-12-10 14:29:23.812110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:59.146 [2024-12-10 14:29:23.812121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:59.146 [2024-12-10 14:29:23.812138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:59.146 [2024-12-10 14:29:23.812150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:59.146 [2024-12-10 14:29:23.812168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.146 [2024-12-10 14:29:23.812179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:59.146 [2024-12-10 14:29:23.812195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:23:59.146 [2024-12-10 14:29:23.812211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.146 [2024-12-10 14:29:23.864005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.146 [2024-12-10 14:29:23.864046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.146 [2024-12-10 14:29:23.864066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.803 ms 00:23:59.146 [2024-12-10 14:29:23.864100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.146 [2024-12-10 14:29:23.864260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.146 [2024-12-10 14:29:23.864274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.146 [2024-12-10 14:29:23.864292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:59.146 [2024-12-10 14:29:23.864303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.146 [2024-12-10 14:29:23.919794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.146 [2024-12-10 14:29:23.919845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.146 [2024-12-10 14:29:23.919862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.548 ms 00:23:59.146 [2024-12-10 14:29:23.919872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.146 [2024-12-10 14:29:23.919969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.146 [2024-12-10 14:29:23.919982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.146 [2024-12-10 14:29:23.919997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:59.146 [2024-12-10 14:29:23.920008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.146 [2024-12-10 14:29:23.920816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.146 [2024-12-10 14:29:23.920840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.146 [2024-12-10 14:29:23.920855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:23:59.146 [2024-12-10 14:29:23.920866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:59.146 [2024-12-10 14:29:23.921007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.146 [2024-12-10 14:29:23.921021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.146 [2024-12-10 14:29:23.921035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:23:59.146 [2024-12-10 14:29:23.921046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.146 [2024-12-10 14:29:23.947986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.146 [2024-12-10 14:29:23.948024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.146 [2024-12-10 14:29:23.948045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.949 ms 00:23:59.146 [2024-12-10 14:29:23.948057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:23.981024] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:59.406 [2024-12-10 14:29:23.981065] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:59.406 [2024-12-10 14:29:23.981090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:23.981103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:59.406 [2024-12-10 14:29:23.981122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.943 ms 00:23:59.406 [2024-12-10 14:29:23.981147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.010885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.010924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:59.406 [2024-12-10 14:29:24.010961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.689 ms 00:23:59.406 [2024-12-10 14:29:24.010972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.028982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.029016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:59.406 [2024-12-10 14:29:24.029057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.919 ms 00:23:59.406 [2024-12-10 14:29:24.029067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.046382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.046416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:59.406 [2024-12-10 14:29:24.046451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.255 ms 00:23:59.406 [2024-12-10 14:29:24.046461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.047336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.047372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.406 [2024-12-10 14:29:24.047391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:23:59.406 [2024-12-10 14:29:24.047403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 
14:29:24.139488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.139569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:59.406 [2024-12-10 14:29:24.139598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.193 ms 00:23:59.406 [2024-12-10 14:29:24.139626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.150173] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:59.406 [2024-12-10 14:29:24.174528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.174613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.406 [2024-12-10 14:29:24.174640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.842 ms 00:23:59.406 [2024-12-10 14:29:24.174656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.174833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.174854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:59.406 [2024-12-10 14:29:24.174866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:59.406 [2024-12-10 14:29:24.174882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.174957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.174991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.406 [2024-12-10 14:29:24.175003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:59.406 [2024-12-10 14:29:24.175020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.175049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.175063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.406 [2024-12-10 14:29:24.175075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:59.406 [2024-12-10 14:29:24.175089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.175135] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:59.406 [2024-12-10 14:29:24.175155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.175170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:59.406 [2024-12-10 14:29:24.175184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:59.406 [2024-12-10 14:29:24.175194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.406 [2024-12-10 14:29:24.212125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.406 [2024-12-10 14:29:24.212166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.406 [2024-12-10 14:29:24.212187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.948 ms 00:23:59.406 [2024-12-10 14:29:24.212198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.407 [2024-12-10 14:29:24.212344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.407 [2024-12-10 14:29:24.212359] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.407 [2024-12-10 14:29:24.212377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:59.407 [2024-12-10 14:29:24.212393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.407 [2024-12-10 14:29:24.213875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:59.407 [2024-12-10 14:29:24.218242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 450.494 ms, result 0 00:23:59.407 [2024-12-10 14:29:24.219390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:59.666 Some configs were skipped because the RPC state that can call them passed over. 00:23:59.666 14:29:24 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:59.666 [2024-12-10 14:29:24.446222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.666 [2024-12-10 14:29:24.446290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:59.666 [2024-12-10 14:29:24.446309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:23:59.666 [2024-12-10 14:29:24.446327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.666 [2024-12-10 14:29:24.446371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.865 ms, result 0 00:23:59.666 true 00:23:59.666 14:29:24 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:59.926 [2024-12-10 14:29:24.657893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.926 [2024-12-10 14:29:24.657947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:59.926 [2024-12-10 14:29:24.657973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:23:59.926 [2024-12-10 14:29:24.657985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.926 [2024-12-10 14:29:24.658038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.629 ms, result 0 00:23:59.926 true 00:23:59.926 14:29:24 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79815 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79815 ']' 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79815 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79815 00:23:59.926 killing process with pid 79815 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79815' 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79815 00:23:59.926 14:29:24 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79815 00:24:01.306 [2024-12-10 14:29:25.914189] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.306 [2024-12-10 14:29:25.914292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:01.306 [2024-12-10 14:29:25.914312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:01.307 [2024-12-10 14:29:25.914325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.914353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:01.307 [2024-12-10 14:29:25.919037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.919080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:01.307 [2024-12-10 14:29:25.919117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.666 ms 00:24:01.307 [2024-12-10 14:29:25.919127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.919430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.919446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:01.307 [2024-12-10 14:29:25.919460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:24:01.307 [2024-12-10 14:29:25.919470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.922931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.922971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:01.307 [2024-12-10 14:29:25.922991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.442 ms 00:24:01.307 [2024-12-10 14:29:25.923002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.928470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.928507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:01.307 [2024-12-10 14:29:25.928525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.432 ms 00:24:01.307 [2024-12-10 14:29:25.928535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.942888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.942942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:01.307 [2024-12-10 14:29:25.942979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.290 ms 00:24:01.307 [2024-12-10 14:29:25.942989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.953476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.953514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:01.307 [2024-12-10 14:29:25.953546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.429 ms 00:24:01.307 [2024-12-10 14:29:25.953557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.953726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.953741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:01.307 [2024-12-10 14:29:25.953754] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:24:01.307 [2024-12-10 14:29:25.953764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.969043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.969076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:01.307 [2024-12-10 14:29:25.969092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.279 ms 00:24:01.307 [2024-12-10 14:29:25.969101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.983707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.983738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:01.307 [2024-12-10 14:29:25.983778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.557 ms 00:24:01.307 [2024-12-10 14:29:25.983787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:25.997434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:25.997473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:01.307 [2024-12-10 14:29:25.997490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.612 ms 00:24:01.307 [2024-12-10 14:29:25.997498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:26.011702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.307 [2024-12-10 14:29:26.011734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:01.307 [2024-12-10 14:29:26.011766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.142 ms 00:24:01.307 [2024-12-10 14:29:26.011775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.307 [2024-12-10 14:29:26.011828] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:01.307 [2024-12-10 14:29:26.011846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 
14:29:26.011976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.011989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:24:01.307 [2024-12-10 14:29:26.012309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:01.307 [2024-12-10 14:29:26.012524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.012997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:01.308 [2024-12-10 14:29:26.013166] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:01.308 [2024-12-10 14:29:26.013189] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1fe60d2-6936-421b-b05c-2884051c2feb 00:24:01.308 [2024-12-10 14:29:26.013205] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:01.308 [2024-12-10 14:29:26.013218] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:01.308 [2024-12-10 14:29:26.013228] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:01.308 [2024-12-10 14:29:26.013242] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:01.308 [2024-12-10 14:29:26.013253] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:01.308 [2024-12-10 14:29:26.013267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:01.308 [2024-12-10 14:29:26.013277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:01.308 [2024-12-10 14:29:26.013290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:01.308 [2024-12-10 14:29:26.013299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:01.308 [2024-12-10 14:29:26.013312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
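Two numbers in the statistics dump above can be sanity-checked against earlier lines in this log. WAF (write amplification factor) is, roughly, total device writes over user writes; with user writes: 0 against total writes: 960 (presumably all metadata traffic from startup, the two 1024-block trims, and shutdown), the quotient is undefined, which the log renders as WAF: inf. Likewise, the geometry reported at startup (L2P entries: 23592960) lines up with the second bdev_ftl_unmap invocation: 23592960 - 1024 = 23591936, so --lba 23591936 --num_blocks 1024 trims exactly the last 1024 blocks of the device, mirroring the first trim of blocks 0-1023.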
00:24:01.308 [2024-12-10 14:29:26.013323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:01.308 [2024-12-10 14:29:26.013337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:24:01.308 [2024-12-10 14:29:26.013347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.308 [2024-12-10 14:29:26.033487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.308 [2024-12-10 14:29:26.033520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:01.308 [2024-12-10 14:29:26.033540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.142 ms 00:24:01.308 [2024-12-10 14:29:26.033550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.308 [2024-12-10 14:29:26.034237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.308 [2024-12-10 14:29:26.034263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:01.308 [2024-12-10 14:29:26.034282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:24:01.308 [2024-12-10 14:29:26.034292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.308 [2024-12-10 14:29:26.104355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.308 [2024-12-10 14:29:26.104395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:01.308 [2024-12-10 14:29:26.104412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.308 [2024-12-10 14:29:26.104423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.308 [2024-12-10 14:29:26.104530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.308 [2024-12-10 14:29:26.104543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:01.308 [2024-12-10 14:29:26.104562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.308 [2024-12-10 14:29:26.104572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.308 [2024-12-10 14:29:26.104631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.308 [2024-12-10 14:29:26.104644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:01.308 [2024-12-10 14:29:26.104661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.308 [2024-12-10 14:29:26.104671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.308 [2024-12-10 14:29:26.104705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.308 [2024-12-10 14:29:26.104716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:01.308 [2024-12-10 14:29:26.104731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.308 [2024-12-10 14:29:26.104744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.568 [2024-12-10 14:29:26.234725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.568 [2024-12-10 14:29:26.234779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:01.568 [2024-12-10 14:29:26.234800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.568 [2024-12-10 14:29:26.234813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.568 [2024-12-10 
14:29:26.339106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.568 [2024-12-10 14:29:26.339185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:01.568 [2024-12-10 14:29:26.339207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.568 [2024-12-10 14:29:26.339223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.568 [2024-12-10 14:29:26.339358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.568 [2024-12-10 14:29:26.339371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:01.568 [2024-12-10 14:29:26.339391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.568 [2024-12-10 14:29:26.339401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.568 [2024-12-10 14:29:26.339439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.568 [2024-12-10 14:29:26.339450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:01.568 [2024-12-10 14:29:26.339465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.568 [2024-12-10 14:29:26.339475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.568 [2024-12-10 14:29:26.339624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.568 [2024-12-10 14:29:26.339638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:01.568 [2024-12-10 14:29:26.339652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.568 [2024-12-10 14:29:26.339663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.568 [2024-12-10 14:29:26.339727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.568 [2024-12-10 14:29:26.339741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:01.568 [2024-12-10 14:29:26.339755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.568 [2024-12-10 14:29:26.339766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.568 [2024-12-10 14:29:26.339822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.568 [2024-12-10 14:29:26.339834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:01.568 [2024-12-10 14:29:26.339852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.568 [2024-12-10 14:29:26.339862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.568 [2024-12-10 14:29:26.339918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.569 [2024-12-10 14:29:26.339930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:01.569 [2024-12-10 14:29:26.339944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.569 [2024-12-10 14:29:26.339955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.569 [2024-12-10 14:29:26.340129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 426.595 ms, result 0 00:24:02.948 14:29:27 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:02.948 14:29:27 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:02.948 [2024-12-10 14:29:27.516198] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:24:02.948 [2024-12-10 14:29:27.516347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79887 ] 00:24:02.948 [2024-12-10 14:29:27.701856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.208 [2024-12-10 14:29:27.835410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.467 [2024-12-10 14:29:28.252267] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:03.467 [2024-12-10 14:29:28.252358] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:03.728 [2024-12-10 14:29:28.418398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.418474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:03.728 [2024-12-10 14:29:28.418491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:03.728 [2024-12-10 14:29:28.418502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.728 [2024-12-10 14:29:28.422081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.422123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.728 [2024-12-10 14:29:28.422137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.562 ms 00:24:03.728 [2024-12-10 14:29:28.422148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.728 [2024-12-10 14:29:28.422256] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:03.728 [2024-12-10 14:29:28.423290] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:03.728 [2024-12-10 14:29:28.423327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.423339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.728 [2024-12-10 14:29:28.423351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:24:03.728 [2024-12-10 14:29:28.423362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.728 [2024-12-10 14:29:28.425832] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:03.728 [2024-12-10 14:29:28.445720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.445771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:03.728 [2024-12-10 14:29:28.445803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.922 ms 00:24:03.728 [2024-12-10 14:29:28.445814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.728 [2024-12-10 14:29:28.445923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.445938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:03.728 [2024-12-10 14:29:28.445949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:24:03.728 [2024-12-10 14:29:28.445960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.728 [2024-12-10 14:29:28.457726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.457756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.728 [2024-12-10 14:29:28.457785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.741 ms 00:24:03.728 [2024-12-10 14:29:28.457796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.728 [2024-12-10 14:29:28.457919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.457935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.728 [2024-12-10 14:29:28.457946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:03.728 [2024-12-10 14:29:28.457957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.728 [2024-12-10 14:29:28.457991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.458002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:03.728 [2024-12-10 14:29:28.458012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:03.728 [2024-12-10 14:29:28.458023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.728 [2024-12-10 14:29:28.458048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:03.728 [2024-12-10 14:29:28.463689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.463720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.728 [2024-12-10 14:29:28.463749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.657 ms 00:24:03.728 [2024-12-10 14:29:28.463760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.728 [2024-12-10 14:29:28.463817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.728 [2024-12-10 14:29:28.463830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:03.728 [2024-12-10 14:29:28.463841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:03.729 [2024-12-10 14:29:28.463851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.729 [2024-12-10 14:29:28.463878] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:03.729 [2024-12-10 14:29:28.463905] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:03.729 [2024-12-10 14:29:28.463942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:03.729 [2024-12-10 14:29:28.463962] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:03.729 [2024-12-10 14:29:28.464053] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:03.729 [2024-12-10 14:29:28.464067] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:03.729 [2024-12-10 14:29:28.464081] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:03.729 [2024-12-10 14:29:28.464114] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464126] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464139] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:03.729 [2024-12-10 14:29:28.464150] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:03.729 [2024-12-10 14:29:28.464161] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:03.729 [2024-12-10 14:29:28.464171] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:03.729 [2024-12-10 14:29:28.464182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.729 [2024-12-10 14:29:28.464192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:03.729 [2024-12-10 14:29:28.464203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:24:03.729 [2024-12-10 14:29:28.464213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.729 [2024-12-10 14:29:28.464290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.729 [2024-12-10 14:29:28.464306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:03.729 [2024-12-10 14:29:28.464318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:03.729 [2024-12-10 14:29:28.464328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.729 [2024-12-10 14:29:28.464423] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:03.729 [2024-12-10 14:29:28.464450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:03.729 [2024-12-10 14:29:28.464462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:03.729 [2024-12-10 14:29:28.464494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:03.729 [2024-12-10 14:29:28.464525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:03.729 [2024-12-10 14:29:28.464544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:03.729 [2024-12-10 14:29:28.464570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:03.729 [2024-12-10 14:29:28.464581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:03.729 [2024-12-10 14:29:28.464591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:03.729 [2024-12-10 14:29:28.464601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:03.729 [2024-12-10 14:29:28.464610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464619] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:03.729 [2024-12-10 14:29:28.464629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:03.729 [2024-12-10 14:29:28.464658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:03.729 [2024-12-10 14:29:28.464698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:03.729 [2024-12-10 14:29:28.464727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:03.729 [2024-12-10 14:29:28.464757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:03.729 [2024-12-10 14:29:28.464787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:03.729 [2024-12-10 14:29:28.464805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:03.729 [2024-12-10 14:29:28.464814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:03.729 [2024-12-10 14:29:28.464822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:03.729 [2024-12-10 14:29:28.464832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:03.729 [2024-12-10 14:29:28.464841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:03.729 [2024-12-10 14:29:28.464850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:03.729 [2024-12-10 14:29:28.464869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:03.729 [2024-12-10 14:29:28.464878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464888] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:03.729 [2024-12-10 14:29:28.464899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:03.729 [2024-12-10 14:29:28.464915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.729 [2024-12-10 14:29:28.464935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:03.729 
[2024-12-10 14:29:28.464945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:03.729 [2024-12-10 14:29:28.464955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:03.729 [2024-12-10 14:29:28.464965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:03.729 [2024-12-10 14:29:28.464974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:03.729 [2024-12-10 14:29:28.464984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:03.729 [2024-12-10 14:29:28.464996] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:03.729 [2024-12-10 14:29:28.465009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:03.729 [2024-12-10 14:29:28.465020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:03.729 [2024-12-10 14:29:28.465031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:03.729 [2024-12-10 14:29:28.465041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:03.729 [2024-12-10 14:29:28.465052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:03.729 [2024-12-10 14:29:28.465062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:03.729 [2024-12-10 14:29:28.465073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:03.729 [2024-12-10 14:29:28.465084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:03.729 [2024-12-10 14:29:28.465094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:03.729 [2024-12-10 14:29:28.465104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:03.729 [2024-12-10 14:29:28.465115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:03.729 [2024-12-10 14:29:28.465125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:03.729 [2024-12-10 14:29:28.465135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:03.729 [2024-12-10 14:29:28.465145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:03.729 [2024-12-10 14:29:28.465156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:03.729 [2024-12-10 14:29:28.465166] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:03.729 [2024-12-10 14:29:28.465178] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:03.729 [2024-12-10 14:29:28.465190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:03.729 [2024-12-10 14:29:28.465200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:03.729 [2024-12-10 14:29:28.465210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:03.729 [2024-12-10 14:29:28.465221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:03.729 [2024-12-10 14:29:28.465232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.730 [2024-12-10 14:29:28.465247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:03.730 [2024-12-10 14:29:28.465258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:24:03.730 [2024-12-10 14:29:28.465269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.730 [2024-12-10 14:29:28.515180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.730 [2024-12-10 14:29:28.515218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.730 [2024-12-10 14:29:28.515233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.925 ms 00:24:03.730 [2024-12-10 14:29:28.515244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.730 [2024-12-10 14:29:28.515410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.730 [2024-12-10 14:29:28.515424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:03.730 [2024-12-10 14:29:28.515436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:03.730 [2024-12-10 14:29:28.515447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.989 [2024-12-10 14:29:28.593984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.989 [2024-12-10 14:29:28.594033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:03.989 [2024-12-10 14:29:28.594049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.638 ms 00:24:03.989 [2024-12-10 14:29:28.594060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.989 [2024-12-10 14:29:28.594161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.989 [2024-12-10 14:29:28.594176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.989 [2024-12-10 14:29:28.594188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:03.989 [2024-12-10 14:29:28.594200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.989 [2024-12-10 14:29:28.594977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.989 [2024-12-10 14:29:28.595001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.989 [2024-12-10 14:29:28.595023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:24:03.989 [2024-12-10 14:29:28.595033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.989 [2024-12-10 
14:29:28.595171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.989 [2024-12-10 14:29:28.595185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.989 [2024-12-10 14:29:28.595196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:24:03.989 [2024-12-10 14:29:28.595208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.989 [2024-12-10 14:29:28.619198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.989 [2024-12-10 14:29:28.619236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.989 [2024-12-10 14:29:28.619251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.004 ms 00:24:03.990 [2024-12-10 14:29:28.619263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.990 [2024-12-10 14:29:28.639792] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:03.990 [2024-12-10 14:29:28.639842] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:03.990 [2024-12-10 14:29:28.639875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.990 [2024-12-10 14:29:28.639887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:03.990 [2024-12-10 14:29:28.639901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.491 ms 00:24:03.990 [2024-12-10 14:29:28.639912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.990 [2024-12-10 14:29:28.669613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.990 [2024-12-10 14:29:28.669654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:03.990 [2024-12-10 14:29:28.669677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.663 ms 00:24:03.990 [2024-12-10 14:29:28.669689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.990 [2024-12-10 14:29:28.688402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.990 [2024-12-10 14:29:28.688442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:03.990 [2024-12-10 14:29:28.688456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.638 ms 00:24:03.990 [2024-12-10 14:29:28.688466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.990 [2024-12-10 14:29:28.706894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.990 [2024-12-10 14:29:28.706932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:03.990 [2024-12-10 14:29:28.706946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.375 ms 00:24:03.990 [2024-12-10 14:29:28.706957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.990 [2024-12-10 14:29:28.707836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.990 [2024-12-10 14:29:28.707864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:03.990 [2024-12-10 14:29:28.707878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:24:03.990 [2024-12-10 14:29:28.707889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.990 [2024-12-10 14:29:28.804836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:03.990 [2024-12-10 14:29:28.804924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:03.990 [2024-12-10 14:29:28.804944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.070 ms 00:24:03.990 [2024-12-10 14:29:28.804956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.990 [2024-12-10 14:29:28.815361] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:04.249 [2024-12-10 14:29:28.840079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.249 [2024-12-10 14:29:28.840138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:04.249 [2024-12-10 14:29:28.840155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.056 ms 00:24:04.249 [2024-12-10 14:29:28.840192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.249 [2024-12-10 14:29:28.840353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.249 [2024-12-10 14:29:28.840368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:04.249 [2024-12-10 14:29:28.840380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:04.249 [2024-12-10 14:29:28.840391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.249 [2024-12-10 14:29:28.840467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.249 [2024-12-10 14:29:28.840479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:04.249 [2024-12-10 14:29:28.840491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:04.249 [2024-12-10 14:29:28.840507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.249 [2024-12-10 14:29:28.840568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.250 [2024-12-10 14:29:28.840583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:04.250 [2024-12-10 14:29:28.840595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:04.250 [2024-12-10 14:29:28.840606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.250 [2024-12-10 14:29:28.840653] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:04.250 [2024-12-10 14:29:28.840667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.250 [2024-12-10 14:29:28.840679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:04.250 [2024-12-10 14:29:28.840690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:04.250 [2024-12-10 14:29:28.840724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.250 [2024-12-10 14:29:28.876177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.250 [2024-12-10 14:29:28.876219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:04.250 [2024-12-10 14:29:28.876234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.483 ms 00:24:04.250 [2024-12-10 14:29:28.876246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.250 [2024-12-10 14:29:28.876385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.250 [2024-12-10 14:29:28.876400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:24:04.250 [2024-12-10 14:29:28.876411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:04.250 [2024-12-10 14:29:28.876422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.250 [2024-12-10 14:29:28.877805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:04.250 [2024-12-10 14:29:28.882016] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 459.764 ms, result 0 00:24:04.250 [2024-12-10 14:29:28.882937] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:04.250 [2024-12-10 14:29:28.901006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:05.188  [2024-12-10T14:29:30.961Z] Copying: 27/256 [MB] (27 MBps) [2024-12-10T14:29:32.339Z] Copying: 51/256 [MB] (23 MBps) [2024-12-10T14:29:32.907Z] Copying: 75/256 [MB] (24 MBps) [2024-12-10T14:29:34.287Z] Copying: 99/256 [MB] (23 MBps) [2024-12-10T14:29:35.229Z] Copying: 123/256 [MB] (23 MBps) [2024-12-10T14:29:36.180Z] Copying: 146/256 [MB] (23 MBps) [2024-12-10T14:29:37.120Z] Copying: 170/256 [MB] (23 MBps) [2024-12-10T14:29:38.058Z] Copying: 192/256 [MB] (22 MBps) [2024-12-10T14:29:38.997Z] Copying: 214/256 [MB] (21 MBps) [2024-12-10T14:29:39.935Z] Copying: 236/256 [MB] (21 MBps) [2024-12-10T14:29:39.935Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-10 14:29:39.779831] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:15.101 [2024-12-10 14:29:39.793747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.793794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:15.101 [2024-12-10 14:29:39.793820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:15.101 [2024-12-10 14:29:39.793832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.101 [2024-12-10 14:29:39.793858] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:15.101 [2024-12-10 14:29:39.797663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.797705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:15.101 [2024-12-10 14:29:39.797720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.792 ms 00:24:15.101 [2024-12-10 14:29:39.797731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.101 [2024-12-10 14:29:39.797959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.797975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:15.101 [2024-12-10 14:29:39.797987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:24:15.101 [2024-12-10 14:29:39.797999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.101 [2024-12-10 14:29:39.800643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.800676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:15.101 [2024-12-10 14:29:39.800689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:24:15.101 [2024-12-10 14:29:39.800700] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.101 [2024-12-10 14:29:39.805882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.805922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:15.101 [2024-12-10 14:29:39.805935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.169 ms 00:24:15.101 [2024-12-10 14:29:39.805946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.101 [2024-12-10 14:29:39.839283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.839328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:15.101 [2024-12-10 14:29:39.839344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.319 ms 00:24:15.101 [2024-12-10 14:29:39.839355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.101 [2024-12-10 14:29:39.859435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.859478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:15.101 [2024-12-10 14:29:39.859502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.050 ms 00:24:15.101 [2024-12-10 14:29:39.859514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.101 [2024-12-10 14:29:39.859648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.859665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:15.101 [2024-12-10 14:29:39.859701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:15.101 [2024-12-10 14:29:39.859714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.101 [2024-12-10 14:29:39.893702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.893747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:15.101 [2024-12-10 14:29:39.893761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.022 ms 00:24:15.101 [2024-12-10 14:29:39.893772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.101 [2024-12-10 14:29:39.927452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.101 [2024-12-10 14:29:39.927495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:15.101 [2024-12-10 14:29:39.927510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.670 ms 00:24:15.101 [2024-12-10 14:29:39.927521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.362 [2024-12-10 14:29:39.960462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.362 [2024-12-10 14:29:39.960506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:15.362 [2024-12-10 14:29:39.960520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.916 ms 00:24:15.362 [2024-12-10 14:29:39.960532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.362 [2024-12-10 14:29:39.993699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.362 [2024-12-10 14:29:39.993743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:15.362 [2024-12-10 14:29:39.993757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 33.133 ms 00:24:15.362 [2024-12-10 14:29:39.993769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.362 [2024-12-10 14:29:39.993828] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:15.362 [2024-12-10 14:29:39.993847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.993993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 
[2024-12-10 14:29:39.994117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:15.362 [2024-12-10 14:29:39.994394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:24:15.362 [2024-12-10 14:29:39.994406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.994990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.995001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.995013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.995024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:15.363 [2024-12-10 14:29:39.995042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:15.363 [2024-12-10 14:29:39.995053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1fe60d2-6936-421b-b05c-2884051c2feb 00:24:15.363 [2024-12-10 14:29:39.995064] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:15.363 [2024-12-10 14:29:39.995075] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:15.363 [2024-12-10 14:29:39.995085] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:15.363 [2024-12-10 14:29:39.995096] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:15.363 [2024-12-10 14:29:39.995106] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:15.363 [2024-12-10 14:29:39.995119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:15.363 [2024-12-10 14:29:39.995135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:15.363 [2024-12-10 14:29:39.995145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:15.363 [2024-12-10 14:29:39.995155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:15.363 [2024-12-10 14:29:39.995166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.363 [2024-12-10 14:29:39.995177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:15.363 [2024-12-10 14:29:39.995189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:24:15.363 [2024-12-10 14:29:39.995202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.363 [2024-12-10 14:29:40.014826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.363 [2024-12-10 14:29:40.014866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:15.363 [2024-12-10 14:29:40.014880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.633 ms 00:24:15.363 [2024-12-10 14:29:40.014891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.363 [2024-12-10 14:29:40.015468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.363 [2024-12-10 14:29:40.015496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:15.363 [2024-12-10 14:29:40.015508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:24:15.363 [2024-12-10 14:29:40.015519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.363 [2024-12-10 14:29:40.069719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.363 [2024-12-10 14:29:40.069775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:15.363 [2024-12-10 14:29:40.069793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.363 [2024-12-10 14:29:40.069811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.363 [2024-12-10 14:29:40.069906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.363 [2024-12-10 
14:29:40.069920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:15.363 [2024-12-10 14:29:40.069933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.363 [2024-12-10 14:29:40.069944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.363 [2024-12-10 14:29:40.070003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.363 [2024-12-10 14:29:40.070017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:15.363 [2024-12-10 14:29:40.070029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.363 [2024-12-10 14:29:40.070041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.363 [2024-12-10 14:29:40.070067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.363 [2024-12-10 14:29:40.070080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:15.363 [2024-12-10 14:29:40.070092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.363 [2024-12-10 14:29:40.070104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.363 [2024-12-10 14:29:40.187779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.363 [2024-12-10 14:29:40.187839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:15.363 [2024-12-10 14:29:40.187855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.363 [2024-12-10 14:29:40.187867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.623 [2024-12-10 14:29:40.282861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.623 [2024-12-10 14:29:40.282916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:15.623 [2024-12-10 14:29:40.282931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.623 [2024-12-10 14:29:40.282943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.623 [2024-12-10 14:29:40.283003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.623 [2024-12-10 14:29:40.283016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:15.623 [2024-12-10 14:29:40.283028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.623 [2024-12-10 14:29:40.283040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.623 [2024-12-10 14:29:40.283071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.623 [2024-12-10 14:29:40.283091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:15.623 [2024-12-10 14:29:40.283103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.623 [2024-12-10 14:29:40.283115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.623 [2024-12-10 14:29:40.283238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.623 [2024-12-10 14:29:40.283254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:15.623 [2024-12-10 14:29:40.283266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.623 [2024-12-10 14:29:40.283277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.623 [2024-12-10 14:29:40.283319] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.623 [2024-12-10 14:29:40.283334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:15.623 [2024-12-10 14:29:40.283350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.623 [2024-12-10 14:29:40.283362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.623 [2024-12-10 14:29:40.283402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.623 [2024-12-10 14:29:40.283416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:15.623 [2024-12-10 14:29:40.283427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.623 [2024-12-10 14:29:40.283438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.623 [2024-12-10 14:29:40.283483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.623 [2024-12-10 14:29:40.283501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:15.623 [2024-12-10 14:29:40.283513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.623 [2024-12-10 14:29:40.283526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.623 [2024-12-10 14:29:40.283668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 490.707 ms, result 0 00:24:16.562 00:24:16.562 00:24:16.562 14:29:41 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:24:16.562 14:29:41 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:17.131 14:29:41 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:17.131 [2024-12-10 14:29:41.869227] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:24:17.131 [2024-12-10 14:29:41.869376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80039 ] 00:24:17.390 [2024-12-10 14:29:42.052050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.390 [2024-12-10 14:29:42.156645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.961 [2024-12-10 14:29:42.500025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:17.961 [2024-12-10 14:29:42.500105] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:17.961 [2024-12-10 14:29:42.662347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.662403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:17.961 [2024-12-10 14:29:42.662420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:17.961 [2024-12-10 14:29:42.662432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.665435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.665491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:17.961 [2024-12-10 14:29:42.665506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.985 ms 00:24:17.961 [2024-12-10 14:29:42.665517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.665619] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:17.961 [2024-12-10 14:29:42.666536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:17.961 [2024-12-10 14:29:42.666576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.666588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:17.961 [2024-12-10 14:29:42.666601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:24:17.961 [2024-12-10 14:29:42.666612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.668262] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:17.961 [2024-12-10 14:29:42.686799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.686856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:17.961 [2024-12-10 14:29:42.686873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.568 ms 00:24:17.961 [2024-12-10 14:29:42.686885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.686995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.687012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:17.961 [2024-12-10 14:29:42.687025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:17.961 [2024-12-10 14:29:42.687037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.693929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:17.961 [2024-12-10 14:29:42.693960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:17.961 [2024-12-10 14:29:42.693973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:24:17.961 [2024-12-10 14:29:42.693985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.694085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.694102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:17.961 [2024-12-10 14:29:42.694115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:17.961 [2024-12-10 14:29:42.694127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.694162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.694175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:17.961 [2024-12-10 14:29:42.694187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:17.961 [2024-12-10 14:29:42.694198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.694222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:17.961 [2024-12-10 14:29:42.698895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.698933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:17.961 [2024-12-10 14:29:42.698946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:24:17.961 [2024-12-10 14:29:42.698957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.699031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.699046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:17.961 [2024-12-10 14:29:42.699057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:17.961 [2024-12-10 14:29:42.699069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.699099] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:17.961 [2024-12-10 14:29:42.699126] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:17.961 [2024-12-10 14:29:42.699161] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:17.961 [2024-12-10 14:29:42.699180] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:17.961 [2024-12-10 14:29:42.699265] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:17.961 [2024-12-10 14:29:42.699281] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:17.961 [2024-12-10 14:29:42.699296] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:17.961 [2024-12-10 14:29:42.699315] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:17.961 [2024-12-10 14:29:42.699328] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:17.961 [2024-12-10 14:29:42.699342] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:17.961 [2024-12-10 14:29:42.699353] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:17.961 [2024-12-10 14:29:42.699365] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:17.961 [2024-12-10 14:29:42.699375] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:17.961 [2024-12-10 14:29:42.699388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.699400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:17.961 [2024-12-10 14:29:42.699412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:24:17.961 [2024-12-10 14:29:42.699423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.699498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.961 [2024-12-10 14:29:42.699517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:17.961 [2024-12-10 14:29:42.699529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:17.961 [2024-12-10 14:29:42.699540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.961 [2024-12-10 14:29:42.699629] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:17.961 [2024-12-10 14:29:42.699655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:17.961 [2024-12-10 14:29:42.699680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:17.961 [2024-12-10 14:29:42.699694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.961 [2024-12-10 14:29:42.699706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:17.961 [2024-12-10 14:29:42.699717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:17.961 [2024-12-10 14:29:42.699727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:17.961 [2024-12-10 14:29:42.699739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:17.961 [2024-12-10 14:29:42.699750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:17.961 [2024-12-10 14:29:42.699761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:17.961 [2024-12-10 14:29:42.699772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:17.961 [2024-12-10 14:29:42.699796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:17.961 [2024-12-10 14:29:42.699807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:17.961 [2024-12-10 14:29:42.699818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:17.962 [2024-12-10 14:29:42.699829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:17.962 [2024-12-10 14:29:42.699839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.962 [2024-12-10 14:29:42.699850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:17.962 [2024-12-10 14:29:42.699860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:17.962 [2024-12-10 14:29:42.699870] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.962 [2024-12-10 14:29:42.699881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:17.962 [2024-12-10 14:29:42.699891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:17.962 [2024-12-10 14:29:42.699901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.962 [2024-12-10 14:29:42.699911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:17.962 [2024-12-10 14:29:42.699921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:17.962 [2024-12-10 14:29:42.699931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.962 [2024-12-10 14:29:42.699943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:17.962 [2024-12-10 14:29:42.699953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:17.962 [2024-12-10 14:29:42.699963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.962 [2024-12-10 14:29:42.699974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:17.962 [2024-12-10 14:29:42.699984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:17.962 [2024-12-10 14:29:42.699995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.962 [2024-12-10 14:29:42.700005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:17.962 [2024-12-10 14:29:42.700015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:17.962 [2024-12-10 14:29:42.700027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:17.962 [2024-12-10 14:29:42.700036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:17.962 [2024-12-10 14:29:42.700046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:17.962 [2024-12-10 14:29:42.700056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:17.962 [2024-12-10 14:29:42.700067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:17.962 [2024-12-10 14:29:42.700077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:17.962 [2024-12-10 14:29:42.700087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.962 [2024-12-10 14:29:42.700096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:17.962 [2024-12-10 14:29:42.700106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:17.962 [2024-12-10 14:29:42.700116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.962 [2024-12-10 14:29:42.700127] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:17.962 [2024-12-10 14:29:42.700140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:17.962 [2024-12-10 14:29:42.700155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:17.962 [2024-12-10 14:29:42.700166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.962 [2024-12-10 14:29:42.700178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:17.962 [2024-12-10 14:29:42.700189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:17.962 [2024-12-10 14:29:42.700199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:17.962 
[2024-12-10 14:29:42.700210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:17.962 [2024-12-10 14:29:42.700220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:17.962 [2024-12-10 14:29:42.700230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:17.962 [2024-12-10 14:29:42.700243] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:17.962 [2024-12-10 14:29:42.700257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:17.962 [2024-12-10 14:29:42.700268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:17.962 [2024-12-10 14:29:42.700280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:17.962 [2024-12-10 14:29:42.700291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:17.962 [2024-12-10 14:29:42.700302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:17.962 [2024-12-10 14:29:42.700313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:17.962 [2024-12-10 14:29:42.700324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:17.962 [2024-12-10 14:29:42.700336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:17.962 [2024-12-10 14:29:42.700347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:17.962 [2024-12-10 14:29:42.700358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:17.962 [2024-12-10 14:29:42.700371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:17.962 [2024-12-10 14:29:42.700382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:17.962 [2024-12-10 14:29:42.700393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:17.962 [2024-12-10 14:29:42.700405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:17.962 [2024-12-10 14:29:42.700418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:17.962 [2024-12-10 14:29:42.700429] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:17.962 [2024-12-10 14:29:42.700441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:17.962 [2024-12-10 14:29:42.700453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:17.962 [2024-12-10 14:29:42.700465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:17.962 [2024-12-10 14:29:42.700477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:17.962 [2024-12-10 14:29:42.700488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:17.962 [2024-12-10 14:29:42.700500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.962 [2024-12-10 14:29:42.700517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:17.962 [2024-12-10 14:29:42.700528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:24:17.962 [2024-12-10 14:29:42.700540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.962 [2024-12-10 14:29:42.734808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.962 [2024-12-10 14:29:42.734851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:17.962 [2024-12-10 14:29:42.734866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.262 ms 00:24:17.962 [2024-12-10 14:29:42.734878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.962 [2024-12-10 14:29:42.734998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.962 [2024-12-10 14:29:42.735015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:17.962 [2024-12-10 14:29:42.735027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:17.962 [2024-12-10 14:29:42.735038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.809804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.809850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:18.223 [2024-12-10 14:29:42.809871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.860 ms 00:24:18.223 [2024-12-10 14:29:42.809883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.809987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.810003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:18.223 [2024-12-10 14:29:42.810015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:18.223 [2024-12-10 14:29:42.810027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.810491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.810519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:18.223 [2024-12-10 14:29:42.810539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:24:18.223 [2024-12-10 14:29:42.810551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.810667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.810708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:18.223 [2024-12-10 14:29:42.810721] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:24:18.223 [2024-12-10 14:29:42.810733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.831248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.831293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:18.223 [2024-12-10 14:29:42.831308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.521 ms 00:24:18.223 [2024-12-10 14:29:42.831320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.850169] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:18.223 [2024-12-10 14:29:42.850216] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:18.223 [2024-12-10 14:29:42.850233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.850245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:18.223 [2024-12-10 14:29:42.850258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.836 ms 00:24:18.223 [2024-12-10 14:29:42.850269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.878909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.878957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:18.223 [2024-12-10 14:29:42.878973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.595 ms 00:24:18.223 [2024-12-10 14:29:42.878985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.896280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.896341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:18.223 [2024-12-10 14:29:42.896356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.238 ms 00:24:18.223 [2024-12-10 14:29:42.896367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.913086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.913130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:18.223 [2024-12-10 14:29:42.913144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.662 ms 00:24:18.223 [2024-12-10 14:29:42.913155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.913856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.913895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:18.223 [2024-12-10 14:29:42.913909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:24:18.223 [2024-12-10 14:29:42.913920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:42.998718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:42.998776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:18.223 [2024-12-10 14:29:42.998795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.901 ms 00:24:18.223 [2024-12-10 14:29:42.998807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:43.008663] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:18.223 [2024-12-10 14:29:43.024143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:43.024189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:18.223 [2024-12-10 14:29:43.024205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.283 ms 00:24:18.223 [2024-12-10 14:29:43.024222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:43.024334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:43.024349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:18.223 [2024-12-10 14:29:43.024363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:18.223 [2024-12-10 14:29:43.024374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:43.024430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:43.024443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:18.223 [2024-12-10 14:29:43.024455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:18.223 [2024-12-10 14:29:43.024471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:43.024509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:43.024524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:18.223 [2024-12-10 14:29:43.024537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:18.223 [2024-12-10 14:29:43.024548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.223 [2024-12-10 14:29:43.024589] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:18.223 [2024-12-10 14:29:43.024603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.223 [2024-12-10 14:29:43.024615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:18.223 [2024-12-10 14:29:43.024627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:18.223 [2024-12-10 14:29:43.024638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.483 [2024-12-10 14:29:43.059172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.483 [2024-12-10 14:29:43.059220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:18.483 [2024-12-10 14:29:43.059236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.564 ms 00:24:18.483 [2024-12-10 14:29:43.059248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.483 [2024-12-10 14:29:43.059366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.483 [2024-12-10 14:29:43.059382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:18.483 [2024-12-10 14:29:43.059395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:18.483 [2024-12-10 14:29:43.059407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:18.483 [2024-12-10 14:29:43.060372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:18.483 [2024-12-10 14:29:43.064422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 398.396 ms, result 0 00:24:18.483 [2024-12-10 14:29:43.065276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:18.483 [2024-12-10 14:29:43.083289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:18.483  [2024-12-10T14:29:43.317Z] Copying: 4096/4096 [kB] (average 20 MBps) [2024-12-10 14:29:43.278748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:18.483 [2024-12-10 14:29:43.292072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.483 [2024-12-10 14:29:43.292116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:18.483 [2024-12-10 14:29:43.292138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:18.483 [2024-12-10 14:29:43.292150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.483 [2024-12-10 14:29:43.292174] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:18.483 [2024-12-10 14:29:43.296147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.483 [2024-12-10 14:29:43.296180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:18.483 [2024-12-10 14:29:43.296193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.962 ms 00:24:18.483 [2024-12-10 14:29:43.296204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.483 [2024-12-10 14:29:43.298260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.483 [2024-12-10 14:29:43.298301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:18.483 [2024-12-10 14:29:43.298315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:24:18.483 [2024-12-10 14:29:43.298326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.483 [2024-12-10 14:29:43.301361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.483 [2024-12-10 14:29:43.301398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:18.483 [2024-12-10 14:29:43.301412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms 00:24:18.483 [2024-12-10 14:29:43.301423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.483 [2024-12-10 14:29:43.306634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.483 [2024-12-10 14:29:43.306682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:18.483 [2024-12-10 14:29:43.306695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.177 ms 00:24:18.483 [2024-12-10 14:29:43.306706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.744 [2024-12-10 14:29:43.340930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.744 [2024-12-10 14:29:43.340975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:18.744 [2024-12-10 14:29:43.340990] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 34.211 ms 00:24:18.744 [2024-12-10 14:29:43.341000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.744 [2024-12-10 14:29:43.361681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.744 [2024-12-10 14:29:43.361733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:18.744 [2024-12-10 14:29:43.361748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.654 ms 00:24:18.744 [2024-12-10 14:29:43.361759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.744 [2024-12-10 14:29:43.361875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.744 [2024-12-10 14:29:43.361890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:18.744 [2024-12-10 14:29:43.361915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:18.744 [2024-12-10 14:29:43.361926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.744 [2024-12-10 14:29:43.396431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.744 [2024-12-10 14:29:43.396474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:18.744 [2024-12-10 14:29:43.396489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.539 ms 00:24:18.744 [2024-12-10 14:29:43.396499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.744 [2024-12-10 14:29:43.431072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.744 [2024-12-10 14:29:43.431115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:18.744 [2024-12-10 14:29:43.431129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.570 ms 00:24:18.744 [2024-12-10 14:29:43.431140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.744 [2024-12-10 14:29:43.464964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.744 [2024-12-10 14:29:43.465007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:18.744 [2024-12-10 14:29:43.465021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.796 ms 00:24:18.744 [2024-12-10 14:29:43.465032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.744 [2024-12-10 14:29:43.498508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.744 [2024-12-10 14:29:43.498551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:18.744 [2024-12-10 14:29:43.498565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.447 ms 00:24:18.744 [2024-12-10 14:29:43.498575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.744 [2024-12-10 14:29:43.498632] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:18.744 [2024-12-10 14:29:43.498652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:18.744 [2024-12-10 14:29:43.498666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:18.744 [2024-12-10 14:29:43.498691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:18.744 [2024-12-10 14:29:43.498703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:24:18.745 [2024-12-10 14:29:43.498716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.498996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499573] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:18.745 [2024-12-10 14:29:43.499779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:18.746 [2024-12-10 14:29:43.499790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:18.746 [2024-12-10 14:29:43.499802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:18.746 [2024-12-10 14:29:43.499814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:18.746 [2024-12-10 14:29:43.499826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:18.746 [2024-12-10 14:29:43.499838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:18.746 [2024-12-10 14:29:43.499849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:18.746 [2024-12-10 14:29:43.499866] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:18.746 [2024-12-10 14:29:43.499877] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1fe60d2-6936-421b-b05c-2884051c2feb 00:24:18.746 [2024-12-10 14:29:43.499888] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:18.746 [2024-12-10 14:29:43.499910] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:24:18.746 [2024-12-10 14:29:43.499920] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:18.746 [2024-12-10 14:29:43.499931] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:18.746 [2024-12-10 14:29:43.499942] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:18.746 [2024-12-10 14:29:43.499954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:18.746 [2024-12-10 14:29:43.499968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:18.746 [2024-12-10 14:29:43.499979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:18.746 [2024-12-10 14:29:43.499990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:18.746 [2024-12-10 14:29:43.500000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.746 [2024-12-10 14:29:43.500012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:18.746 [2024-12-10 14:29:43.500024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:24:18.746 [2024-12-10 14:29:43.500035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.746 [2024-12-10 14:29:43.517761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.746 [2024-12-10 14:29:43.517799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:18.746 [2024-12-10 14:29:43.517812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.731 ms 00:24:18.746 [2024-12-10 14:29:43.517824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.746 [2024-12-10 14:29:43.518346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.746 [2024-12-10 14:29:43.518374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:18.746 [2024-12-10 14:29:43.518386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:24:18.746 [2024-12-10 14:29:43.518397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.746 [2024-12-10 14:29:43.568543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.746 [2024-12-10 14:29:43.568583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:18.746 [2024-12-10 14:29:43.568597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.746 [2024-12-10 14:29:43.568615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.746 [2024-12-10 14:29:43.568708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.746 [2024-12-10 14:29:43.568722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:18.746 [2024-12-10 14:29:43.568737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.746 [2024-12-10 14:29:43.568749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.746 [2024-12-10 14:29:43.568801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.746 [2024-12-10 14:29:43.568815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:18.746 [2024-12-10 14:29:43.568827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.746 [2024-12-10 14:29:43.568838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.746 [2024-12-10 14:29:43.568862] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.746 [2024-12-10 14:29:43.568875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:18.746 [2024-12-10 14:29:43.568886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.746 [2024-12-10 14:29:43.568897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.006 [2024-12-10 14:29:43.685015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:19.006 [2024-12-10 14:29:43.685069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:19.006 [2024-12-10 14:29:43.685085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:19.006 [2024-12-10 14:29:43.685103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.006 [2024-12-10 14:29:43.777940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:19.006 [2024-12-10 14:29:43.777993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:19.006 [2024-12-10 14:29:43.778008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:19.006 [2024-12-10 14:29:43.778020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.006 [2024-12-10 14:29:43.778082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:19.006 [2024-12-10 14:29:43.778097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:19.006 [2024-12-10 14:29:43.778109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:19.006 [2024-12-10 14:29:43.778121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.006 [2024-12-10 14:29:43.778152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:19.006 [2024-12-10 14:29:43.778172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:19.006 [2024-12-10 14:29:43.778184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:19.006 [2024-12-10 14:29:43.778196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.006 [2024-12-10 14:29:43.778302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:19.006 [2024-12-10 14:29:43.778317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:19.006 [2024-12-10 14:29:43.778329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:19.006 [2024-12-10 14:29:43.778340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.006 [2024-12-10 14:29:43.778381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:19.006 [2024-12-10 14:29:43.778397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:19.006 [2024-12-10 14:29:43.778414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:19.006 [2024-12-10 14:29:43.778425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.006 [2024-12-10 14:29:43.778467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:19.006 [2024-12-10 14:29:43.778479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:19.006 [2024-12-10 14:29:43.778491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:19.006 [2024-12-10 14:29:43.778502] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:19.006 [2024-12-10 14:29:43.778548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:19.006 [2024-12-10 14:29:43.778566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:19.006 [2024-12-10 14:29:43.778578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:19.006 [2024-12-10 14:29:43.778589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.006 [2024-12-10 14:29:43.778754] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.458 ms, result 0 00:24:19.944 00:24:19.944 00:24:20.204 14:29:44 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=80070 00:24:20.204 14:29:44 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:20.204 14:29:44 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 80070 00:24:20.204 14:29:44 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 80070 ']' 00:24:20.204 14:29:44 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.204 14:29:44 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.204 14:29:44 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.204 14:29:44 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.204 14:29:44 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:20.205 [2024-12-10 14:29:44.923771] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
[2024-12-10 14:29:44.923935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80070 ]
[2024-12-10 14:29:45.106282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-10 14:29:45.209400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
14:29:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
14:29:46 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
14:29:46 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
[2024-12-10 14:29:46.240801] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (logged twice while the cache bdev is still being created)
[FTL][ftl0] startup actions (status 0): Check configuration (0.008 ms), Open base bdev (3.559 ms), Open cache bdev (1.023 ms); using nvc0n1p0 as write buffer cache and the bdev as NV Cache device; SHM: clean 0, shm_clean 0
[FTL][ftl0] startup actions (status 0): Load super block (19.216 ms), Validate super block (0.031 ms)
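load_config here replays the bdev and FTL configuration that the test captured from the previous target instance. A minimal sketch of the save/replay round trip, assuming /tmp/ftl_config.json as a scratch path:

# Hedged sketch: persist the configuration from one spdk_tgt and replay it
# into a fresh one over the RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" save_config > /tmp/ftl_config.json    # against the old target, before killing it
"$rpc" load_config < /tmp/ftl_config.json    # against the new target, once it listens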
[FTL][ftl0] startup actions (status 0): Initialize memory pools (6.759 ms), Initialize bands (0.100 ms), Register IO device (0.008 ms), Initialize core IO channel (4.488 ms), Decorate bands (0.009 ms); FTL IO channel created on ftl_core_thread
[FTL][ftl0] FTL layout setup mode 0; layout blobs: nvc load 0x150 bytes, base load 0x48 bytes, layout load 0x190 bytes; nvc store 0x150 bytes, base store 0x48 bytes, layout store 0x190 bytes
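Every management step in this trace is logged as a name/duration/status triple, which makes the raw (un-digested) console output easy to mine for slow steps. A throwaway helper, assuming GNU grep with -P support and the console output saved as ftl.log:

# Hedged helper: pair each trace_step name with its duration and rank the
# slowest steps first. Relies on the raw per-line log format shown above.
paste <(grep -oP 'name: \K.*' ftl.log) \
      <(grep -oP 'duration: \K[0-9.]+' ftl.log) |
    sort -t$'\t' -k2,2 -rn | head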
[FTL][ftl0] Base device capacity: 103424.00 MiB; NV cache device capacity: 5171.00 MiB; L2P entries: 23592960; L2P address size: 4; P2L checkpoint pages: 2048; NV cache chunk count: 5
[FTL][ftl0] startup actions (status 0): Initialize layout (0.365 ms), Verify layout (0.060 ms)
[FTL][ftl0] NV cache layout (region: offset MiB / blocks MiB):
  sb               0.00   / 0.12
  l2p              0.12   / 90.00
  band_md          90.12  / 0.50
  band_md_mirror   90.62  / 0.50
  nvc_md           123.88 / 0.12
  nvc_md_mirror    124.00 / 0.12
  p2l0             91.12  / 8.00
  p2l1             99.12  / 8.00
  p2l2             107.12 / 8.00
  p2l3             115.12 / 8.00
  trim_md          123.12 / 0.25
  trim_md_mirror   123.38 / 0.25
  trim_log         123.62 / 0.12
  trim_log_mirror  123.75 / 0.12
[FTL][ftl0] Base device layout (region: offset MiB / blocks MiB):
  sb_mirror        0.00      / 0.12
  vmap             102400.25 / 3.38
  data_btm         0.25      / 102400.00
[FTL][ftl0] SB metadata layout - nvc (region type / ver / blk_offs / blk_sz):
  0x0         ver 5   0x0      0x20
  0x2         ver 0   0x20     0x5a00
  0x3         ver 2   0x5a20   0x80
  0x4         ver 2   0x5aa0   0x80
  0xa         ver 2   0x5b20   0x800
  0xb         ver 2   0x6320   0x800
  0xc         ver 2   0x6b20   0x800
  0xd         ver 2   0x7320   0x800
  0xe         ver 0   0x7b20   0x40
  0xf         ver 0   0x7b60   0x40
  0x10        ver 1   0x7ba0   0x20
  0x11        ver 1   0x7bc0   0x20
  0x6         ver 2   0x7be0   0x20
  0x7         ver 2   0x7c00   0x20
  0xfffffffe  ver 0   0x7c20   0x13b6e0
[FTL][ftl0] SB metadata layout - base dev (region type / ver / blk_offs / blk_sz):
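The SB metadata rows are expressed in FTL blocks, so they can be cross-checked against the MiB figures in the layout dump; FTL uses a 4 KiB block. For example, region type 0x2 (the L2P) spans 0x5a00 blocks, which is exactly the 90.00 MiB shown for the l2p region above:

# Cross-check of the l2p region size (4 KiB FTL blocks):
echo $(( 0x5a00 ))                       # 23040 blocks
echo $(( 0x5a00 * 4096 / 1024 / 1024 ))  # 90 MiB, matching the layout dump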
  0x1         ver 5   0x0        0x20
  0xfffffffe  ver 0   0x20       0x20
  0x9         ver 0   0x40       0x1900000
  0x5         ver 0   0x1900040  0x360
  0xfffffffe  ver 0   0x19003a0  0x3fc60
[FTL][ftl0] startup actions (status 0): Layout upgrade (1.247 ms), Initialize metadata (41.900 ms), Initialize band addresses (0.047 ms), Initialize NV cache (50.473 ms), Initialize valid map (0.003 ms), Initialize trim map (0.423 ms), Initialize bands metadata (0.094 ms), Initialize reloc (22.239 ms)
[2024-12-10 14:29:46.622278] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-12-10 14:29:46.622326] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[FTL][ftl0] startup actions (status 0): Restore NV cache metadata (43.630 ms), Restore valid map metadata (28.085 ms), Restore band info metadata (17.137 ms), Restore trim metadata (16.632 ms), Initialize P2L checkpointing (0.560 ms)
[2024-12-10 14:29:46.780684] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[FTL][ftl0] startup actions (status 0): Restore P2L checkpoints (85.357 ms), Initialize L2P (25.482 ms), Restore L2P (0.007 ms), Finalize band initialization (0.028 ms), Start core poller (0.007 ms), Self test on startup (0.014 ms, self test skipped), Set FTL dirty state (34.811 ms), Finalize initialization (0.031 ms); FTL IO channel created on app_thread
[2024-12-10 14:29:46.836903] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.017 ms, result 0
[2024-12-10 14:29:46.838040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
Some configs were skipped because the RPC state that can call them passed over.
14:29:46 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
[FTL][ftl0] Process trim (1.577 ms, status 0); Management process finished, name 'FTL trim', duration = 1.986 ms, result 0
true
14:29:47 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
[FTL][ftl0] Process trim (1.150 ms, status 0); Management process finished, name 'FTL trim', duration = 1.401 ms, result 0
true
14:29:47 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 80070
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 80070 ']'
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 80070
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80070
killing process with pid 80070
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80070'
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 80070
14:29:47 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 80070
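The two bdev_ftl_unmap calls trim 1024 blocks at each end of the device's LBA range: with 23592960 L2P entries, lba 23591936 = 23592960 - 1024 addresses the final stripe. Both commands are verbatim from the trace above and can be replayed by hand against a running target:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024         # head of the device
"$rpc" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024  # tail: 23592960 - 1024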
[FTL][ftl0] shutdown actions (status 0): Deinit core IO channel (0.003 ms; FTL IO channel destroy on ftl_core_thread), Unregister IO device (4.123 ms), Stop core poller (0.215 ms), Persist L2P (3.162 ms), Finish L2P trims (5.202 ms), Persist NV cache metadata (13.994 ms), Persist valid map metadata (10.557 ms), Persist P2L metadata (0.094 ms), Persist band info metadata (14.885 ms), Persist trim metadata (14.438 ms), Persist superblock (13.556 ms), Set FTL clean state (13.378 ms)
[FTL][ftl0] Bands validity: Band 1 through Band 100, all identical: 0 / 261120, wr_cnt: 0, state: free
[FTL][ftl0] device UUID: d1fe60d2-6936-421b-b05c-2884051c2feb
[FTL][ftl0] total valid LBAs: 0; total writes: 960; user writes: 0; WAF: inf
[FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
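The WAF line is the write amplification factor, total media writes divided by user writes; since no user data has been written yet, the ratio is reported as inf. The 960 media writes are presumably FTL's own metadata traffic (superblock, maps, band info) from startup, the two trims, and shutdown:

WAF = total writes / user writes = 960 / 0 -> reported as inf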
[FTL][ftl0] shutdown actions (status 0): Dump statistics (1.873 ms), Deinitialize L2P (18.563 ms), Deinitialize P2L checkpointing (0.455 ms)
[FTL][ftl0] Rollback steps (each: duration 0.000 ms, status 0): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-12-10 14:29:48.798224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.602 ms, result 0
14:29:49 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
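spdk_dd reads the first 65536 blocks back out of ftl0 into a regular file, standing up its own one-shot SPDK app from the JSON config. A sketch of replaying the step by hand; the spdk_dd command is verbatim from the trace, while the md5sum fingerprint afterwards is an assumed verification step:

dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$dd_bin" --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
          --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data  # fingerprint the read-back data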
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:25.107 [2024-12-10 14:29:49.872568] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:24:25.107 [2024-12-10 14:29:49.872919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80135 ] 00:24:25.366 [2024-12-10 14:29:50.059496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.366 [2024-12-10 14:29:50.163135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.936 [2024-12-10 14:29:50.538874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.936 [2024-12-10 14:29:50.538951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.936 [2024-12-10 14:29:50.700925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.700978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:25.936 [2024-12-10 14:29:50.700996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:25.936 [2024-12-10 14:29:50.701008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.703963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.704008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.936 [2024-12-10 14:29:50.704022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:24:25.936 [2024-12-10 14:29:50.704033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.704134] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:25.936 [2024-12-10 14:29:50.705046] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:25.936 [2024-12-10 14:29:50.705087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.705099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.936 [2024-12-10 14:29:50.705112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:24:25.936 [2024-12-10 14:29:50.705123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.706787] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:25.936 [2024-12-10 14:29:50.725421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.725473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:25.936 [2024-12-10 14:29:50.725490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.665 ms 00:24:25.936 [2024-12-10 14:29:50.725501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.725610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.725626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:25.936 [2024-12-10 14:29:50.725640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:25.936 [2024-12-10 
14:29:50.725651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.732521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.732553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.936 [2024-12-10 14:29:50.732567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.803 ms 00:24:25.936 [2024-12-10 14:29:50.732578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.732692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.732709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.936 [2024-12-10 14:29:50.732723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:25.936 [2024-12-10 14:29:50.732734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.732770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.732782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:25.936 [2024-12-10 14:29:50.732794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:25.936 [2024-12-10 14:29:50.732805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.732830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:25.936 [2024-12-10 14:29:50.737893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.738059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.936 [2024-12-10 14:29:50.738190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.076 ms 00:24:25.936 [2024-12-10 14:29:50.738233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.738340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.738457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:25.936 [2024-12-10 14:29:50.738499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:25.936 [2024-12-10 14:29:50.738534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.738647] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:25.936 [2024-12-10 14:29:50.738727] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:25.936 [2024-12-10 14:29:50.738873] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:25.936 [2024-12-10 14:29:50.738937] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:25.936 [2024-12-10 14:29:50.739121] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:25.936 [2024-12-10 14:29:50.739183] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:25.936 [2024-12-10 14:29:50.739296] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:24:25.936 [2024-12-10 14:29:50.739364] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:25.936 [2024-12-10 14:29:50.739464] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:25.936 [2024-12-10 14:29:50.739506] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:25.936 [2024-12-10 14:29:50.739519] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:25.936 [2024-12-10 14:29:50.739531] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:25.936 [2024-12-10 14:29:50.739544] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:25.936 [2024-12-10 14:29:50.739558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.739570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:25.936 [2024-12-10 14:29:50.739583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:24:25.936 [2024-12-10 14:29:50.739596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.739703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.936 [2024-12-10 14:29:50.739726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:25.936 [2024-12-10 14:29:50.739738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:25.936 [2024-12-10 14:29:50.739750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.936 [2024-12-10 14:29:50.739846] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:25.936 [2024-12-10 14:29:50.739864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:25.936 [2024-12-10 14:29:50.739876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.936 [2024-12-10 14:29:50.739889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.936 [2024-12-10 14:29:50.739901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:25.936 [2024-12-10 14:29:50.739912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:25.936 [2024-12-10 14:29:50.739923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:25.936 [2024-12-10 14:29:50.739936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:25.936 [2024-12-10 14:29:50.739947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:25.936 [2024-12-10 14:29:50.739958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.936 [2024-12-10 14:29:50.739969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:25.936 [2024-12-10 14:29:50.739991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:25.936 [2024-12-10 14:29:50.740003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.936 [2024-12-10 14:29:50.740014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:25.936 [2024-12-10 14:29:50.740027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:25.936 [2024-12-10 14:29:50.740038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.936 [2024-12-10 14:29:50.740049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:24:25.936 [2024-12-10 14:29:50.740061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:25.936 [2024-12-10 14:29:50.740073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.936 [2024-12-10 14:29:50.740085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:25.936 [2024-12-10 14:29:50.740097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:25.936 [2024-12-10 14:29:50.740109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.936 [2024-12-10 14:29:50.740120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:25.936 [2024-12-10 14:29:50.740132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:25.936 [2024-12-10 14:29:50.740142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.936 [2024-12-10 14:29:50.740154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:25.937 [2024-12-10 14:29:50.740166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:25.937 [2024-12-10 14:29:50.740178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.937 [2024-12-10 14:29:50.740190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:25.937 [2024-12-10 14:29:50.740201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:25.937 [2024-12-10 14:29:50.740212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.937 [2024-12-10 14:29:50.740234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:25.937 [2024-12-10 14:29:50.740246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:25.937 [2024-12-10 14:29:50.740257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.937 [2024-12-10 14:29:50.740268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:25.937 [2024-12-10 14:29:50.740280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:25.937 [2024-12-10 14:29:50.740291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.937 [2024-12-10 14:29:50.740302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:25.937 [2024-12-10 14:29:50.740314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:25.937 [2024-12-10 14:29:50.740324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.937 [2024-12-10 14:29:50.740336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:25.937 [2024-12-10 14:29:50.740347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:25.937 [2024-12-10 14:29:50.740359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.937 [2024-12-10 14:29:50.740369] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:25.937 [2024-12-10 14:29:50.740381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:25.937 [2024-12-10 14:29:50.740396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.937 [2024-12-10 14:29:50.740408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.937 [2024-12-10 14:29:50.740420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:25.937 [2024-12-10 14:29:50.740431] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:25.937 [2024-12-10 14:29:50.740442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:25.937 [2024-12-10 14:29:50.740453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:25.937 [2024-12-10 14:29:50.740463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:25.937 [2024-12-10 14:29:50.740475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:25.937 [2024-12-10 14:29:50.740489] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:25.937 [2024-12-10 14:29:50.740503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.937 [2024-12-10 14:29:50.740516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:25.937 [2024-12-10 14:29:50.740529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:25.937 [2024-12-10 14:29:50.740542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:25.937 [2024-12-10 14:29:50.740554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:25.937 [2024-12-10 14:29:50.740567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:25.937 [2024-12-10 14:29:50.740580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:25.937 [2024-12-10 14:29:50.740592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:25.937 [2024-12-10 14:29:50.740604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:25.937 [2024-12-10 14:29:50.740616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:25.937 [2024-12-10 14:29:50.740628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:25.937 [2024-12-10 14:29:50.740639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:25.937 [2024-12-10 14:29:50.740651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:25.937 [2024-12-10 14:29:50.740662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:25.937 [2024-12-10 14:29:50.740702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:25.937 [2024-12-10 14:29:50.740715] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:25.937 [2024-12-10 14:29:50.740728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.937 [2024-12-10 14:29:50.740741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:25.937 [2024-12-10 14:29:50.740754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:25.937 [2024-12-10 14:29:50.740766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:25.937 [2024-12-10 14:29:50.740778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:25.937 [2024-12-10 14:29:50.740791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.937 [2024-12-10 14:29:50.740808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:25.937 [2024-12-10 14:29:50.740820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:24:25.937 [2024-12-10 14:29:50.740831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.777873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.777913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.197 [2024-12-10 14:29:50.777930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.037 ms 00:24:26.197 [2024-12-10 14:29:50.777941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.778058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.778073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:26.197 [2024-12-10 14:29:50.778085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:26.197 [2024-12-10 14:29:50.778097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.844716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.844759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.197 [2024-12-10 14:29:50.844779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.702 ms 00:24:26.197 [2024-12-10 14:29:50.844791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.844891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.844907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.197 [2024-12-10 14:29:50.844920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:26.197 [2024-12-10 14:29:50.844931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.845382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.845398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.197 [2024-12-10 14:29:50.845417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:24:26.197 [2024-12-10 14:29:50.845428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.845555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.845571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.197 [2024-12-10 14:29:50.845584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:24:26.197 [2024-12-10 14:29:50.845596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.863954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.863993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.197 [2024-12-10 14:29:50.864007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.363 ms 00:24:26.197 [2024-12-10 14:29:50.864019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.881767] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:26.197 [2024-12-10 14:29:50.881812] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:26.197 [2024-12-10 14:29:50.881829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.881841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:26.197 [2024-12-10 14:29:50.881854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.733 ms 00:24:26.197 [2024-12-10 14:29:50.881865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.909931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.909989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:26.197 [2024-12-10 14:29:50.910005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.022 ms 00:24:26.197 [2024-12-10 14:29:50.910017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.926817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.926858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:26.197 [2024-12-10 14:29:50.926873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.740 ms 00:24:26.197 [2024-12-10 14:29:50.926884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.943777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.943819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:26.197 [2024-12-10 14:29:50.943833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.836 ms 00:24:26.197 [2024-12-10 14:29:50.943844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.197 [2024-12-10 14:29:50.944524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.197 [2024-12-10 14:29:50.944554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:26.197 [2024-12-10 14:29:50.944568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:24:26.197 [2024-12-10 14:29:50.944579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.457 [2024-12-10 14:29:51.029856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.457 [2024-12-10 
14:29:51.029914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:26.457 [2024-12-10 14:29:51.029932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.383 ms 00:24:26.457 [2024-12-10 14:29:51.029945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.457 [2024-12-10 14:29:51.040009] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:26.457 [2024-12-10 14:29:51.055754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.457 [2024-12-10 14:29:51.055800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:26.457 [2024-12-10 14:29:51.055817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.760 ms 00:24:26.457 [2024-12-10 14:29:51.055837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.457 [2024-12-10 14:29:51.055933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.457 [2024-12-10 14:29:51.055949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:26.457 [2024-12-10 14:29:51.055962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:26.457 [2024-12-10 14:29:51.055974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.457 [2024-12-10 14:29:51.056026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.457 [2024-12-10 14:29:51.056039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:26.457 [2024-12-10 14:29:51.056051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:26.457 [2024-12-10 14:29:51.056067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.457 [2024-12-10 14:29:51.056105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.457 [2024-12-10 14:29:51.056121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:26.457 [2024-12-10 14:29:51.056133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:26.457 [2024-12-10 14:29:51.056144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.457 [2024-12-10 14:29:51.056185] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:26.457 [2024-12-10 14:29:51.056199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.457 [2024-12-10 14:29:51.056211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:26.457 [2024-12-10 14:29:51.056223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:26.458 [2024-12-10 14:29:51.056234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.458 [2024-12-10 14:29:51.091958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.458 [2024-12-10 14:29:51.092006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:26.458 [2024-12-10 14:29:51.092023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.757 ms 00:24:26.458 [2024-12-10 14:29:51.092035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.458 [2024-12-10 14:29:51.092165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.458 [2024-12-10 14:29:51.092180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:26.458 [2024-12-10 
14:29:51.092193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:26.458 [2024-12-10 14:29:51.092204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.458 [2024-12-10 14:29:51.093329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:26.458 [2024-12-10 14:29:51.097555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.752 ms, result 0 00:24:26.458 [2024-12-10 14:29:51.098316] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.458 [2024-12-10 14:29:51.116615] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:27.396  [2024-12-10T14:29:53.608Z] Copying: 24/256 [MB] (24 MBps) [2024-12-10T14:29:54.544Z] Copying: 47/256 [MB] (22 MBps) [2024-12-10T14:29:55.482Z] Copying: 69/256 [MB] (22 MBps) [2024-12-10T14:29:56.458Z] Copying: 91/256 [MB] (22 MBps) [2024-12-10T14:29:57.396Z] Copying: 114/256 [MB] (22 MBps) [2024-12-10T14:29:58.333Z] Copying: 136/256 [MB] (22 MBps) [2024-12-10T14:29:59.271Z] Copying: 159/256 [MB] (23 MBps) [2024-12-10T14:30:00.208Z] Copying: 184/256 [MB] (24 MBps) [2024-12-10T14:30:01.585Z] Copying: 208/256 [MB] (24 MBps) [2024-12-10T14:30:02.152Z] Copying: 232/256 [MB] (24 MBps) [2024-12-10T14:30:02.722Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-10 14:30:02.427208] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:37.888 [2024-12-10 14:30:02.445405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.888 [2024-12-10 14:30:02.445465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:37.888 [2024-12-10 14:30:02.445492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:37.888 [2024-12-10 14:30:02.445504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.888 [2024-12-10 14:30:02.445536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:37.888 [2024-12-10 14:30:02.450242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.888 [2024-12-10 14:30:02.450296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:37.888 [2024-12-10 14:30:02.450310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.695 ms 00:24:37.888 [2024-12-10 14:30:02.450321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.888 [2024-12-10 14:30:02.450596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.888 [2024-12-10 14:30:02.450615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:37.888 [2024-12-10 14:30:02.450627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:24:37.888 [2024-12-10 14:30:02.450639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.888 [2024-12-10 14:30:02.453933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.888 [2024-12-10 14:30:02.453963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:37.888 [2024-12-10 14:30:02.453975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:24:37.888 [2024-12-10 14:30:02.453986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:37.888 [2024-12-10 14:30:02.459628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.888 [2024-12-10 14:30:02.459664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:37.888 [2024-12-10 14:30:02.459684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.623 ms 00:24:37.888 [2024-12-10 14:30:02.459695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.888 [2024-12-10 14:30:02.497641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.888 [2024-12-10 14:30:02.497698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:37.888 [2024-12-10 14:30:02.497713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.916 ms 00:24:37.888 [2024-12-10 14:30:02.497740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.888 [2024-12-10 14:30:02.520431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.888 [2024-12-10 14:30:02.520470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:37.888 [2024-12-10 14:30:02.520508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.658 ms 00:24:37.888 [2024-12-10 14:30:02.520519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.888 [2024-12-10 14:30:02.520685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.888 [2024-12-10 14:30:02.520701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:37.889 [2024-12-10 14:30:02.520726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:24:37.889 [2024-12-10 14:30:02.520736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.889 [2024-12-10 14:30:02.555900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.889 [2024-12-10 14:30:02.555954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:37.889 [2024-12-10 14:30:02.555967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.201 ms 00:24:37.889 [2024-12-10 14:30:02.555977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.889 [2024-12-10 14:30:02.591006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.889 [2024-12-10 14:30:02.591040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:37.889 [2024-12-10 14:30:02.591054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.014 ms 00:24:37.889 [2024-12-10 14:30:02.591063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.889 [2024-12-10 14:30:02.625885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.889 [2024-12-10 14:30:02.625919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:37.889 [2024-12-10 14:30:02.625933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.807 ms 00:24:37.889 [2024-12-10 14:30:02.625959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.889 [2024-12-10 14:30:02.659936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.889 [2024-12-10 14:30:02.659978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:37.889 [2024-12-10 14:30:02.659991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.948 ms 00:24:37.889 
[2024-12-10 14:30:02.660001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.889 [2024-12-10 14:30:02.660071] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:37.889 [2024-12-10 14:30:02.660089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660335] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 
14:30:02.660623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:24:37.889 [2024-12-10 14:30:02.660907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:37.889 [2024-12-10 14:30:02.660949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.660959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.660970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.660980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.660991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:37.890 [2024-12-10 14:30:02.661217] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:37.890 [2024-12-10 14:30:02.661228] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1fe60d2-6936-421b-b05c-2884051c2feb 00:24:37.890 [2024-12-10 14:30:02.661239] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:37.890 [2024-12-10 14:30:02.661249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:37.890 [2024-12-10 14:30:02.661259] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:37.890 [2024-12-10 14:30:02.661270] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:37.890 [2024-12-10 14:30:02.661281] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:37.890 [2024-12-10 14:30:02.661291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:37.890 [2024-12-10 14:30:02.661307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:37.890 [2024-12-10 14:30:02.661316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:37.890 [2024-12-10 14:30:02.661325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:37.890 [2024-12-10 14:30:02.661335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.890 [2024-12-10 14:30:02.661346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:37.890 [2024-12-10 14:30:02.661358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:24:37.890 [2024-12-10 14:30:02.661368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.890 [2024-12-10 14:30:02.681863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.890 [2024-12-10 14:30:02.681900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:37.890 [2024-12-10 14:30:02.681913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.507 ms 00:24:37.890 [2024-12-10 14:30:02.681924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.890 [2024-12-10 14:30:02.682582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.890 [2024-12-10 14:30:02.682605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:37.890 [2024-12-10 14:30:02.682617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:24:37.890 [2024-12-10 14:30:02.682627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.149 [2024-12-10 14:30:02.739422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.739463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:38.150 [2024-12-10 14:30:02.739477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.739494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.739594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.739606] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:38.150 [2024-12-10 14:30:02.739617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.739628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.739690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.739714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:38.150 [2024-12-10 14:30:02.739726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.739736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.739761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.739772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:38.150 [2024-12-10 14:30:02.739782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.739793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.872879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.872934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:38.150 [2024-12-10 14:30:02.872951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.872978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.977165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.977220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:38.150 [2024-12-10 14:30:02.977237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.977248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.977358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.977371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:38.150 [2024-12-10 14:30:02.977382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.977393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.977426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.977453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:38.150 [2024-12-10 14:30:02.977465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.977475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.977602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.977617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:38.150 [2024-12-10 14:30:02.977629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.977639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.977695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.977710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:38.150 [2024-12-10 14:30:02.977727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.977737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.977786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.977798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:38.150 [2024-12-10 14:30:02.977809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.977820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.977870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.150 [2024-12-10 14:30:02.977887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:38.150 [2024-12-10 14:30:02.977898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.150 [2024-12-10 14:30:02.977909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.150 [2024-12-10 14:30:02.978078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.538 ms, result 0 00:24:39.529 00:24:39.529 00:24:39.529 14:30:04 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:39.788 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:24:39.788 14:30:04 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:24:39.788 14:30:04 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:24:39.788 14:30:04 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:39.788 14:30:04 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:39.788 14:30:04 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:24:40.048 14:30:04 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:40.048 14:30:04 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 80070 00:24:40.048 14:30:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 80070 ']' 00:24:40.048 14:30:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 80070 00:24:40.048 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80070) - No such process 00:24:40.048 Process with pid 80070 is not found 00:24:40.048 14:30:04 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 80070 is not found' 00:24:40.048 00:24:40.048 real 1m14.154s 00:24:40.048 user 1m39.015s 00:24:40.048 sys 0m7.658s 00:24:40.048 14:30:04 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.048 ************************************ 00:24:40.048 END TEST ftl_trim 00:24:40.048 ************************************ 00:24:40.048 14:30:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:40.048 14:30:04 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:40.048 14:30:04 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:40.048 14:30:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.048 14:30:04 ftl -- common/autotest_common.sh@10 
-- # set +x 00:24:40.048 ************************************ 00:24:40.048 START TEST ftl_restore 00:24:40.048 ************************************ 00:24:40.048 14:30:04 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:40.317 * Looking for test storage... 00:24:40.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.317 14:30:04 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:40.317 14:30:04 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:24:40.317 14:30:04 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:40.317 14:30:04 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.317 14:30:04 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.317 14:30:05 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:24:40.317 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.317 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:40.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.317 --rc genhtml_branch_coverage=1 00:24:40.317 --rc genhtml_function_coverage=1 00:24:40.317 --rc genhtml_legend=1 00:24:40.317 --rc geninfo_all_blocks=1 00:24:40.317 --rc geninfo_unexecuted_blocks=1 00:24:40.317 00:24:40.317 ' 00:24:40.317 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:40.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.317 --rc genhtml_branch_coverage=1 00:24:40.317 --rc genhtml_function_coverage=1 00:24:40.317 --rc genhtml_legend=1 00:24:40.317 --rc geninfo_all_blocks=1 00:24:40.317 --rc geninfo_unexecuted_blocks=1 00:24:40.317 00:24:40.317 ' 00:24:40.317 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:40.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.317 --rc genhtml_branch_coverage=1 00:24:40.317 --rc genhtml_function_coverage=1 00:24:40.317 --rc genhtml_legend=1 00:24:40.317 --rc geninfo_all_blocks=1 00:24:40.317 --rc geninfo_unexecuted_blocks=1 00:24:40.317 00:24:40.317 ' 00:24:40.317 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:40.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.317 --rc genhtml_branch_coverage=1 00:24:40.317 --rc genhtml_function_coverage=1 00:24:40.317 --rc genhtml_legend=1 00:24:40.317 --rc geninfo_all_blocks=1 00:24:40.317 --rc geninfo_unexecuted_blocks=1 00:24:40.317 00:24:40.317 ' 00:24:40.317 14:30:05 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:40.317 14:30:05 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:24:40.317 14:30:05 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
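[Editor's note] The lt/cmp_versions xtrace a few lines up is the stock scripts/common.sh guard deciding whether the installed lcov predates 2.x, which in turn picks the --rc coverage options exported above. A hand-condensed sketch of that comparison loop, assuming purely numeric version components (the real helper routes each component through its decimal validator, abbreviated here):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"      # "2"    -> (2)
    # Walk the longer component list; absent components count as 0.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *'='* ]]                   # versions equal: only <=, >=, == succeed
}

lt 1.15 2 && echo "pre-2.x lcov"        # matches the trace: 1 < 2 -> return 0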
00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.PERKJZrIXL 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:40.318 
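[Editor's note] At this point restore.sh has its full configuration: -c selected 0000:00:10.0 as the NV-cache device, the first positional argument 0000:00:11.0 is the base device, the RPC timeout is 240 s, and mount_dir is a fresh mktemp directory. A minimal sketch of that setup with names taken from the xtrace; the -u/-f branches and the restore_kill cleanup body (not exercised or shown in this run) are omitted, and the timeout default is an assumption based on no second positional argument appearing above:

mount_dir=$(mktemp -d)                  # /tmp/tmp.PERKJZrIXL in this run

while getopts ':u:c:f' opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;          # 0000:00:10.0 here
    esac
done
shift $((OPTIND - 1))                   # the trace performs the equivalent 'shift 2'

device=$1                               # 0000:00:11.0 here
timeout=240                             # per-RPC timeout passed to rpc.py -t

# restore_kill is the script's cleanup function (body not shown in this trace)
trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT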
14:30:05 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80355 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80355 00:24:40.318 14:30:05 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.318 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 80355 ']' 00:24:40.318 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.318 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.318 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.318 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.318 14:30:05 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:40.581 [2024-12-10 14:30:05.172531] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:24:40.581 [2024-12-10 14:30:05.172684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80355 ] 00:24:40.581 [2024-12-10 14:30:05.356889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.839 [2024-12-10 14:30:05.495576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.777 14:30:06 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.777 14:30:06 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:24:41.777 14:30:06 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:41.777 14:30:06 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:24:41.777 14:30:06 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:41.777 14:30:06 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:24:41.777 14:30:06 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:24:41.777 14:30:06 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:42.036 14:30:06 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:42.036 14:30:06 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:24:42.036 14:30:06 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:42.036 14:30:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:42.036 14:30:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:42.036 14:30:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:42.036 14:30:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:42.036 14:30:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:42.295 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:42.295 { 00:24:42.295 "name": "nvme0n1", 00:24:42.295 "aliases": [ 00:24:42.295 "57444d8a-3821-4860-81b0-d940dffcefa1" 00:24:42.295 ], 00:24:42.295 "product_name": "NVMe disk", 00:24:42.295 "block_size": 4096, 00:24:42.295 "num_blocks": 1310720, 00:24:42.295 "uuid": 
"57444d8a-3821-4860-81b0-d940dffcefa1", 00:24:42.295 "numa_id": -1, 00:24:42.295 "assigned_rate_limits": { 00:24:42.295 "rw_ios_per_sec": 0, 00:24:42.295 "rw_mbytes_per_sec": 0, 00:24:42.295 "r_mbytes_per_sec": 0, 00:24:42.295 "w_mbytes_per_sec": 0 00:24:42.295 }, 00:24:42.295 "claimed": true, 00:24:42.295 "claim_type": "read_many_write_one", 00:24:42.295 "zoned": false, 00:24:42.295 "supported_io_types": { 00:24:42.295 "read": true, 00:24:42.295 "write": true, 00:24:42.295 "unmap": true, 00:24:42.295 "flush": true, 00:24:42.295 "reset": true, 00:24:42.295 "nvme_admin": true, 00:24:42.295 "nvme_io": true, 00:24:42.295 "nvme_io_md": false, 00:24:42.295 "write_zeroes": true, 00:24:42.295 "zcopy": false, 00:24:42.295 "get_zone_info": false, 00:24:42.295 "zone_management": false, 00:24:42.295 "zone_append": false, 00:24:42.295 "compare": true, 00:24:42.295 "compare_and_write": false, 00:24:42.295 "abort": true, 00:24:42.295 "seek_hole": false, 00:24:42.295 "seek_data": false, 00:24:42.295 "copy": true, 00:24:42.295 "nvme_iov_md": false 00:24:42.295 }, 00:24:42.295 "driver_specific": { 00:24:42.295 "nvme": [ 00:24:42.295 { 00:24:42.295 "pci_address": "0000:00:11.0", 00:24:42.295 "trid": { 00:24:42.295 "trtype": "PCIe", 00:24:42.295 "traddr": "0000:00:11.0" 00:24:42.295 }, 00:24:42.295 "ctrlr_data": { 00:24:42.295 "cntlid": 0, 00:24:42.295 "vendor_id": "0x1b36", 00:24:42.295 "model_number": "QEMU NVMe Ctrl", 00:24:42.295 "serial_number": "12341", 00:24:42.295 "firmware_revision": "8.0.0", 00:24:42.295 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:42.295 "oacs": { 00:24:42.295 "security": 0, 00:24:42.295 "format": 1, 00:24:42.295 "firmware": 0, 00:24:42.295 "ns_manage": 1 00:24:42.295 }, 00:24:42.295 "multi_ctrlr": false, 00:24:42.295 "ana_reporting": false 00:24:42.295 }, 00:24:42.295 "vs": { 00:24:42.295 "nvme_version": "1.4" 00:24:42.295 }, 00:24:42.295 "ns_data": { 00:24:42.295 "id": 1, 00:24:42.295 "can_share": false 00:24:42.295 } 00:24:42.295 } 00:24:42.295 ], 00:24:42.295 "mp_policy": "active_passive" 00:24:42.295 } 00:24:42.295 } 00:24:42.295 ]' 00:24:42.295 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:42.295 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:42.295 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.295 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:42.295 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:42.295 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:24:42.295 14:30:07 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:24:42.295 14:30:07 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:42.295 14:30:07 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:24:42.295 14:30:07 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:42.296 14:30:07 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:42.554 14:30:07 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=50e26e82-a171-4d19-82ed-098b8e2f3c0c 00:24:42.554 14:30:07 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:24:42.554 14:30:07 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50e26e82-a171-4d19-82ed-098b8e2f3c0c 00:24:42.813 14:30:07 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:24:43.079 14:30:07 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=966ebfc0-5018-419c-bdae-ee404747839b 00:24:43.079 14:30:07 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 966ebfc0-5018-419c-bdae-ee404747839b 00:24:43.340 14:30:07 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=cd060754-0caf-43d9-9569-f9941545875f 00:24:43.340 14:30:07 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:24:43.340 14:30:07 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cd060754-0caf-43d9-9569-f9941545875f 00:24:43.340 14:30:07 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:24:43.340 14:30:07 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:43.340 14:30:07 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=cd060754-0caf-43d9-9569-f9941545875f 00:24:43.340 14:30:07 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:24:43.340 14:30:07 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size cd060754-0caf-43d9-9569-f9941545875f 00:24:43.340 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=cd060754-0caf-43d9-9569-f9941545875f 00:24:43.340 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:43.340 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:43.340 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:43.340 14:30:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd060754-0caf-43d9-9569-f9941545875f 00:24:43.599 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:43.599 { 00:24:43.599 "name": "cd060754-0caf-43d9-9569-f9941545875f", 00:24:43.599 "aliases": [ 00:24:43.599 "lvs/nvme0n1p0" 00:24:43.599 ], 00:24:43.599 "product_name": "Logical Volume", 00:24:43.599 "block_size": 4096, 00:24:43.599 "num_blocks": 26476544, 00:24:43.599 "uuid": "cd060754-0caf-43d9-9569-f9941545875f", 00:24:43.599 "assigned_rate_limits": { 00:24:43.599 "rw_ios_per_sec": 0, 00:24:43.599 "rw_mbytes_per_sec": 0, 00:24:43.599 "r_mbytes_per_sec": 0, 00:24:43.599 "w_mbytes_per_sec": 0 00:24:43.599 }, 00:24:43.599 "claimed": false, 00:24:43.599 "zoned": false, 00:24:43.599 "supported_io_types": { 00:24:43.599 "read": true, 00:24:43.599 "write": true, 00:24:43.599 "unmap": true, 00:24:43.599 "flush": false, 00:24:43.599 "reset": true, 00:24:43.599 "nvme_admin": false, 00:24:43.599 "nvme_io": false, 00:24:43.599 "nvme_io_md": false, 00:24:43.599 "write_zeroes": true, 00:24:43.599 "zcopy": false, 00:24:43.599 "get_zone_info": false, 00:24:43.599 "zone_management": false, 00:24:43.599 "zone_append": false, 00:24:43.599 "compare": false, 00:24:43.599 "compare_and_write": false, 00:24:43.599 "abort": false, 00:24:43.599 "seek_hole": true, 00:24:43.599 "seek_data": true, 00:24:43.599 "copy": false, 00:24:43.599 "nvme_iov_md": false 00:24:43.599 }, 00:24:43.599 "driver_specific": { 00:24:43.599 "lvol": { 00:24:43.599 "lvol_store_uuid": "966ebfc0-5018-419c-bdae-ee404747839b", 00:24:43.599 "base_bdev": "nvme0n1", 00:24:43.599 "thin_provision": true, 00:24:43.599 "num_allocated_clusters": 0, 00:24:43.599 "snapshot": false, 00:24:43.599 "clone": false, 00:24:43.599 "esnap_clone": false 00:24:43.599 } 00:24:43.599 } 00:24:43.599 } 00:24:43.599 ]' 00:24:43.599 14:30:08 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:43.599 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:43.599 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:43.599 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:43.599 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:43.599 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:43.599 14:30:08 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:24:43.599 14:30:08 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:24:43.599 14:30:08 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:43.858 14:30:08 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:43.858 14:30:08 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:43.858 14:30:08 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size cd060754-0caf-43d9-9569-f9941545875f 00:24:43.858 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=cd060754-0caf-43d9-9569-f9941545875f 00:24:43.858 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:43.858 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:43.858 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:43.858 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd060754-0caf-43d9-9569-f9941545875f 00:24:44.117 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:44.117 { 00:24:44.117 "name": "cd060754-0caf-43d9-9569-f9941545875f", 00:24:44.117 "aliases": [ 00:24:44.117 "lvs/nvme0n1p0" 00:24:44.117 ], 00:24:44.117 "product_name": "Logical Volume", 00:24:44.117 "block_size": 4096, 00:24:44.117 "num_blocks": 26476544, 00:24:44.117 "uuid": "cd060754-0caf-43d9-9569-f9941545875f", 00:24:44.117 "assigned_rate_limits": { 00:24:44.117 "rw_ios_per_sec": 0, 00:24:44.117 "rw_mbytes_per_sec": 0, 00:24:44.117 "r_mbytes_per_sec": 0, 00:24:44.117 "w_mbytes_per_sec": 0 00:24:44.117 }, 00:24:44.117 "claimed": false, 00:24:44.117 "zoned": false, 00:24:44.117 "supported_io_types": { 00:24:44.117 "read": true, 00:24:44.117 "write": true, 00:24:44.117 "unmap": true, 00:24:44.117 "flush": false, 00:24:44.117 "reset": true, 00:24:44.117 "nvme_admin": false, 00:24:44.117 "nvme_io": false, 00:24:44.117 "nvme_io_md": false, 00:24:44.117 "write_zeroes": true, 00:24:44.117 "zcopy": false, 00:24:44.117 "get_zone_info": false, 00:24:44.117 "zone_management": false, 00:24:44.117 "zone_append": false, 00:24:44.117 "compare": false, 00:24:44.117 "compare_and_write": false, 00:24:44.117 "abort": false, 00:24:44.117 "seek_hole": true, 00:24:44.117 "seek_data": true, 00:24:44.117 "copy": false, 00:24:44.117 "nvme_iov_md": false 00:24:44.117 }, 00:24:44.117 "driver_specific": { 00:24:44.117 "lvol": { 00:24:44.117 "lvol_store_uuid": "966ebfc0-5018-419c-bdae-ee404747839b", 00:24:44.117 "base_bdev": "nvme0n1", 00:24:44.117 "thin_provision": true, 00:24:44.117 "num_allocated_clusters": 0, 00:24:44.117 "snapshot": false, 00:24:44.117 "clone": false, 00:24:44.117 "esnap_clone": false 00:24:44.117 } 00:24:44.117 } 00:24:44.117 } 00:24:44.117 ]' 00:24:44.117 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
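[Editor's note] The JSON dump followed by a pair of jq calls, repeated for each bdev above and below, is autotest_common.sh's get_bdev_size helper computing a bdev's size in MiB from bdev_get_bdevs output. A minimal standalone equivalent, using the rpc.py path from this run (the real helper stages the same bdev_info/bs/nb/bdev_size values the trace shows):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$($rpc_py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 4096 for both bdevs in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 1310720 and 26476544 above
    echo $(( bs * nb / 1024 / 1024 ))               # bytes -> MiB
}

get_bdev_size nvme0n1                                  # -> 5120 (whole namespace)
get_bdev_size cd060754-0caf-43d9-9569-f9941545875f     # -> 103424 (the thin lvol)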
00:24:44.117 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:44.117 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:44.117 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:44.117 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:44.117 14:30:08 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:44.117 14:30:08 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:24:44.117 14:30:08 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:44.376 14:30:09 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:24:44.376 14:30:09 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size cd060754-0caf-43d9-9569-f9941545875f 00:24:44.376 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=cd060754-0caf-43d9-9569-f9941545875f 00:24:44.376 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:44.376 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:44.376 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:44.376 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd060754-0caf-43d9-9569-f9941545875f 00:24:44.636 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:44.636 { 00:24:44.636 "name": "cd060754-0caf-43d9-9569-f9941545875f", 00:24:44.636 "aliases": [ 00:24:44.636 "lvs/nvme0n1p0" 00:24:44.636 ], 00:24:44.636 "product_name": "Logical Volume", 00:24:44.636 "block_size": 4096, 00:24:44.636 "num_blocks": 26476544, 00:24:44.636 "uuid": "cd060754-0caf-43d9-9569-f9941545875f", 00:24:44.636 "assigned_rate_limits": { 00:24:44.636 "rw_ios_per_sec": 0, 00:24:44.636 "rw_mbytes_per_sec": 0, 00:24:44.636 "r_mbytes_per_sec": 0, 00:24:44.636 "w_mbytes_per_sec": 0 00:24:44.636 }, 00:24:44.636 "claimed": false, 00:24:44.636 "zoned": false, 00:24:44.636 "supported_io_types": { 00:24:44.636 "read": true, 00:24:44.636 "write": true, 00:24:44.636 "unmap": true, 00:24:44.636 "flush": false, 00:24:44.636 "reset": true, 00:24:44.636 "nvme_admin": false, 00:24:44.636 "nvme_io": false, 00:24:44.636 "nvme_io_md": false, 00:24:44.636 "write_zeroes": true, 00:24:44.636 "zcopy": false, 00:24:44.636 "get_zone_info": false, 00:24:44.636 "zone_management": false, 00:24:44.636 "zone_append": false, 00:24:44.636 "compare": false, 00:24:44.636 "compare_and_write": false, 00:24:44.636 "abort": false, 00:24:44.636 "seek_hole": true, 00:24:44.636 "seek_data": true, 00:24:44.636 "copy": false, 00:24:44.636 "nvme_iov_md": false 00:24:44.636 }, 00:24:44.636 "driver_specific": { 00:24:44.636 "lvol": { 00:24:44.636 "lvol_store_uuid": "966ebfc0-5018-419c-bdae-ee404747839b", 00:24:44.636 "base_bdev": "nvme0n1", 00:24:44.636 "thin_provision": true, 00:24:44.636 "num_allocated_clusters": 0, 00:24:44.636 "snapshot": false, 00:24:44.636 "clone": false, 00:24:44.636 "esnap_clone": false 00:24:44.636 } 00:24:44.636 } 00:24:44.636 } 00:24:44.636 ]' 00:24:44.636 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:44.636 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:44.636 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:44.636 14:30:09 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:24:44.636 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:44.636 14:30:09 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:44.636 14:30:09 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:24:44.636 14:30:09 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d cd060754-0caf-43d9-9569-f9941545875f --l2p_dram_limit 10' 00:24:44.636 14:30:09 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:24:44.636 14:30:09 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:44.636 14:30:09 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:44.636 14:30:09 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:24:44.636 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:24:44.636 14:30:09 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cd060754-0caf-43d9-9569-f9941545875f --l2p_dram_limit 10 -c nvc0n1p0 00:24:44.896 [2024-12-10 14:30:09.539944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.896 [2024-12-10 14:30:09.540005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:44.896 [2024-12-10 14:30:09.540026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:44.896 [2024-12-10 14:30:09.540039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.896 [2024-12-10 14:30:09.540092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.896 [2024-12-10 14:30:09.540107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:44.896 [2024-12-10 14:30:09.540121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:44.896 [2024-12-10 14:30:09.540133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.896 [2024-12-10 14:30:09.540165] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:44.896 [2024-12-10 14:30:09.541095] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:44.896 [2024-12-10 14:30:09.541137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.896 [2024-12-10 14:30:09.541150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:44.896 [2024-12-10 14:30:09.541166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:24:44.896 [2024-12-10 14:30:09.541180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.896 [2024-12-10 14:30:09.541255] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f8b0d425-e333-4ad6-90c7-15392a67b1ea 00:24:44.896 [2024-12-10 14:30:09.542693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.896 [2024-12-10 14:30:09.542737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:44.896 [2024-12-10 14:30:09.542751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:44.896 [2024-12-10 14:30:09.542767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.896 [2024-12-10 14:30:09.550370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.896 [2024-12-10 
14:30:09.550412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:44.896 [2024-12-10 14:30:09.550425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.570 ms 00:24:44.896 [2024-12-10 14:30:09.550439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.896 [2024-12-10 14:30:09.550533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.896 [2024-12-10 14:30:09.550554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:44.896 [2024-12-10 14:30:09.550566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:44.896 [2024-12-10 14:30:09.550585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.896 [2024-12-10 14:30:09.550647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.896 [2024-12-10 14:30:09.550665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:44.896 [2024-12-10 14:30:09.550693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:44.896 [2024-12-10 14:30:09.550708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.896 [2024-12-10 14:30:09.550734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:44.896 [2024-12-10 14:30:09.555706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.897 [2024-12-10 14:30:09.555742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:44.897 [2024-12-10 14:30:09.555760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.982 ms 00:24:44.897 [2024-12-10 14:30:09.555773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.897 [2024-12-10 14:30:09.555813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.897 [2024-12-10 14:30:09.555827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:44.897 [2024-12-10 14:30:09.555842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:44.897 [2024-12-10 14:30:09.555853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.897 [2024-12-10 14:30:09.555893] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:44.897 [2024-12-10 14:30:09.556019] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:44.897 [2024-12-10 14:30:09.556042] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:44.897 [2024-12-10 14:30:09.556057] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:44.897 [2024-12-10 14:30:09.556075] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:44.897 [2024-12-10 14:30:09.556090] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:44.897 [2024-12-10 14:30:09.556106] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:44.897 [2024-12-10 14:30:09.556117] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:44.897 [2024-12-10 14:30:09.556136] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:44.897 [2024-12-10 14:30:09.556148] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:44.897 [2024-12-10 14:30:09.556162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.897 [2024-12-10 14:30:09.556185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:44.897 [2024-12-10 14:30:09.556201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:24:44.897 [2024-12-10 14:30:09.556213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.897 [2024-12-10 14:30:09.556288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.897 [2024-12-10 14:30:09.556300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:44.897 [2024-12-10 14:30:09.556316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:44.897 [2024-12-10 14:30:09.556327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.897 [2024-12-10 14:30:09.556423] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:44.897 [2024-12-10 14:30:09.556447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:44.897 [2024-12-10 14:30:09.556463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.897 [2024-12-10 14:30:09.556476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:44.897 [2024-12-10 14:30:09.556502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:44.897 [2024-12-10 14:30:09.556527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:44.897 [2024-12-10 14:30:09.556541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.897 [2024-12-10 14:30:09.556568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:44.897 [2024-12-10 14:30:09.556578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:44.897 [2024-12-10 14:30:09.556592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.897 [2024-12-10 14:30:09.556603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:44.897 [2024-12-10 14:30:09.556616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:44.897 [2024-12-10 14:30:09.556626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:44.897 [2024-12-10 14:30:09.556653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:44.897 [2024-12-10 14:30:09.556667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:44.897 [2024-12-10 14:30:09.556704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.897 [2024-12-10 14:30:09.556728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:44.897 
[2024-12-10 14:30:09.556740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.897 [2024-12-10 14:30:09.556765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:44.897 [2024-12-10 14:30:09.556779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.897 [2024-12-10 14:30:09.556804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:44.897 [2024-12-10 14:30:09.556816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.897 [2024-12-10 14:30:09.556839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:44.897 [2024-12-10 14:30:09.556855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.897 [2024-12-10 14:30:09.556878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:44.897 [2024-12-10 14:30:09.556889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:44.897 [2024-12-10 14:30:09.556903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.897 [2024-12-10 14:30:09.556913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:44.897 [2024-12-10 14:30:09.556927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:44.897 [2024-12-10 14:30:09.556937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:44.897 [2024-12-10 14:30:09.556962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:44.897 [2024-12-10 14:30:09.556975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.897 [2024-12-10 14:30:09.556985] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:44.897 [2024-12-10 14:30:09.556999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:44.897 [2024-12-10 14:30:09.557010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.897 [2024-12-10 14:30:09.557024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.897 [2024-12-10 14:30:09.557036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:44.897 [2024-12-10 14:30:09.557053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:44.897 [2024-12-10 14:30:09.557063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:44.897 [2024-12-10 14:30:09.557077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:44.897 [2024-12-10 14:30:09.557088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:44.897 [2024-12-10 14:30:09.557101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:44.897 [2024-12-10 14:30:09.557114] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:44.897 [2024-12-10 
14:30:09.557135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.897 [2024-12-10 14:30:09.557147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:44.897 [2024-12-10 14:30:09.557161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:44.897 [2024-12-10 14:30:09.557173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:44.897 [2024-12-10 14:30:09.557198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:44.897 [2024-12-10 14:30:09.557211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:44.897 [2024-12-10 14:30:09.557225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:44.897 [2024-12-10 14:30:09.557236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:44.897 [2024-12-10 14:30:09.557252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:44.897 [2024-12-10 14:30:09.557264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:44.897 [2024-12-10 14:30:09.557281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:44.897 [2024-12-10 14:30:09.557293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:44.897 [2024-12-10 14:30:09.557308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:44.897 [2024-12-10 14:30:09.557320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:44.897 [2024-12-10 14:30:09.557334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:44.897 [2024-12-10 14:30:09.557346] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:44.897 [2024-12-10 14:30:09.557361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.897 [2024-12-10 14:30:09.557374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:44.897 [2024-12-10 14:30:09.557388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:44.897 [2024-12-10 14:30:09.557400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:44.897 [2024-12-10 14:30:09.557415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:44.897 [2024-12-10 14:30:09.557427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.897 [2024-12-10 14:30:09.557441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:44.898 [2024-12-10 14:30:09.557463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:24:44.898 [2024-12-10 14:30:09.557477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.898 [2024-12-10 14:30:09.557520] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:44.898 [2024-12-10 14:30:09.557539] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:51.470 [2024-12-10 14:30:16.059861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.470 [2024-12-10 14:30:16.059929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:51.470 [2024-12-10 14:30:16.059948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6512.899 ms 00:24:51.470 [2024-12-10 14:30:16.059964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.471 [2024-12-10 14:30:16.099307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.471 [2024-12-10 14:30:16.099370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.471 [2024-12-10 14:30:16.099388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.028 ms 00:24:51.471 [2024-12-10 14:30:16.099404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.471 [2024-12-10 14:30:16.099540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.471 [2024-12-10 14:30:16.099561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.471 [2024-12-10 14:30:16.099574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:51.471 [2024-12-10 14:30:16.099596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.471 [2024-12-10 14:30:16.147137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.471 [2024-12-10 14:30:16.147192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.471 [2024-12-10 14:30:16.147208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.572 ms 00:24:51.471 [2024-12-10 14:30:16.147222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.471 [2024-12-10 14:30:16.147255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.471 [2024-12-10 14:30:16.147277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.471 [2024-12-10 14:30:16.147290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:51.471 [2024-12-10 14:30:16.147317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.471 [2024-12-10 14:30:16.147812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.471 [2024-12-10 14:30:16.147842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.471 [2024-12-10 14:30:16.147855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:24:51.471 [2024-12-10 14:30:16.147870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.471 
[2024-12-10 14:30:16.147966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.471 [2024-12-10 14:30:16.147982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.471 [2024-12-10 14:30:16.147998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:51.471 [2024-12-10 14:30:16.148015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.471 [2024-12-10 14:30:16.168810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.471 [2024-12-10 14:30:16.168861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.471 [2024-12-10 14:30:16.168875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.805 ms 00:24:51.471 [2024-12-10 14:30:16.168892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.471 [2024-12-10 14:30:16.204592] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:51.471 [2024-12-10 14:30:16.208695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.471 [2024-12-10 14:30:16.208737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.471 [2024-12-10 14:30:16.208758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.762 ms 00:24:51.471 [2024-12-10 14:30:16.208772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.730 [2024-12-10 14:30:16.395369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.730 [2024-12-10 14:30:16.395415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:51.730 [2024-12-10 14:30:16.395435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 186.858 ms 00:24:51.730 [2024-12-10 14:30:16.395447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.730 [2024-12-10 14:30:16.395600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.730 [2024-12-10 14:30:16.395617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.730 [2024-12-10 14:30:16.395636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:51.730 [2024-12-10 14:30:16.395647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.730 [2024-12-10 14:30:16.430554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.730 [2024-12-10 14:30:16.430598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:51.730 [2024-12-10 14:30:16.430617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.883 ms 00:24:51.730 [2024-12-10 14:30:16.430630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.731 [2024-12-10 14:30:16.464513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.731 [2024-12-10 14:30:16.464557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:51.731 [2024-12-10 14:30:16.464577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.898 ms 00:24:51.731 [2024-12-10 14:30:16.464589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.731 [2024-12-10 14:30:16.465269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.731 [2024-12-10 14:30:16.465296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.731 
[2024-12-10 14:30:16.465313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:24:51.731 [2024-12-10 14:30:16.465328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.990 [2024-12-10 14:30:16.602356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.990 [2024-12-10 14:30:16.602400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:51.990 [2024-12-10 14:30:16.602422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 137.188 ms 00:24:51.990 [2024-12-10 14:30:16.602434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.990 [2024-12-10 14:30:16.639021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.990 [2024-12-10 14:30:16.639068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:51.990 [2024-12-10 14:30:16.639088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.572 ms 00:24:51.990 [2024-12-10 14:30:16.639100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.990 [2024-12-10 14:30:16.673840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.990 [2024-12-10 14:30:16.673884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:51.990 [2024-12-10 14:30:16.673903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.761 ms 00:24:51.990 [2024-12-10 14:30:16.673914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.990 [2024-12-10 14:30:16.708221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.990 [2024-12-10 14:30:16.708269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.990 [2024-12-10 14:30:16.708288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.327 ms 00:24:51.990 [2024-12-10 14:30:16.708299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.990 [2024-12-10 14:30:16.708334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.990 [2024-12-10 14:30:16.708347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:51.990 [2024-12-10 14:30:16.708365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:51.990 [2024-12-10 14:30:16.708377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.990 [2024-12-10 14:30:16.708475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.990 [2024-12-10 14:30:16.708493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.990 [2024-12-10 14:30:16.708508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:51.990 [2024-12-10 14:30:16.708520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.990 [2024-12-10 14:30:16.709637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 7180.913 ms, result 0 00:24:51.990 { 00:24:51.990 "name": "ftl0", 00:24:51.990 "uuid": "f8b0d425-e333-4ad6-90c7-15392a67b1ea" 00:24:51.990 } 00:24:51.990 14:30:16 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:51.990 14:30:16 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:52.250 14:30:16 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:52.250 14:30:16 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:52.509 [2024-12-10 14:30:17.148184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.509 [2024-12-10 14:30:17.148241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:52.509 [2024-12-10 14:30:17.148256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:52.509 [2024-12-10 14:30:17.148271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.509 [2024-12-10 14:30:17.148297] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:52.509 [2024-12-10 14:30:17.152125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.509 [2024-12-10 14:30:17.152166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:52.509 [2024-12-10 14:30:17.152183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.809 ms 00:24:52.509 [2024-12-10 14:30:17.152195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.509 [2024-12-10 14:30:17.152427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.509 [2024-12-10 14:30:17.152446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:52.509 [2024-12-10 14:30:17.152462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:24:52.509 [2024-12-10 14:30:17.152473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.509 [2024-12-10 14:30:17.154804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.509 [2024-12-10 14:30:17.154835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:52.509 [2024-12-10 14:30:17.154851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:24:52.509 [2024-12-10 14:30:17.154863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.509 [2024-12-10 14:30:17.159537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.510 [2024-12-10 14:30:17.159577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:52.510 [2024-12-10 14:30:17.159598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:24:52.510 [2024-12-10 14:30:17.159610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.510 [2024-12-10 14:30:17.192944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.510 [2024-12-10 14:30:17.192989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:52.510 [2024-12-10 14:30:17.193007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.299 ms 00:24:52.510 [2024-12-10 14:30:17.193019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.510 [2024-12-10 14:30:17.213925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.510 [2024-12-10 14:30:17.213969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:52.510 [2024-12-10 14:30:17.213987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.887 ms 00:24:52.510 [2024-12-10 14:30:17.213999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.510 [2024-12-10 14:30:17.214150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.510 [2024-12-10 14:30:17.214167] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:52.510 [2024-12-10 14:30:17.214184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:52.510 [2024-12-10 14:30:17.214196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.510 [2024-12-10 14:30:17.248797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.510 [2024-12-10 14:30:17.248841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:52.510 [2024-12-10 14:30:17.248860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.625 ms 00:24:52.510 [2024-12-10 14:30:17.248872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.510 [2024-12-10 14:30:17.281998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.510 [2024-12-10 14:30:17.282043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:52.510 [2024-12-10 14:30:17.282061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.126 ms 00:24:52.510 [2024-12-10 14:30:17.282072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.510 [2024-12-10 14:30:17.314948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.510 [2024-12-10 14:30:17.314992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:52.510 [2024-12-10 14:30:17.315010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.876 ms 00:24:52.510 [2024-12-10 14:30:17.315021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.770 [2024-12-10 14:30:17.348029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.770 [2024-12-10 14:30:17.348072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:52.770 [2024-12-10 14:30:17.348091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.957 ms 00:24:52.770 [2024-12-10 14:30:17.348103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.770 [2024-12-10 14:30:17.348150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:52.770 [2024-12-10 14:30:17.348168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:52.770 [2024-12-10 14:30:17.348189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:52.770 [2024-12-10 14:30:17.348203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:52.770 [2024-12-10 14:30:17.348219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:52.770 [2024-12-10 14:30:17.348232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:52.770 [2024-12-10 14:30:17.348248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:52.770 [2024-12-10 14:30:17.348260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:52.770 [2024-12-10 14:30:17.348278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:52.770 [2024-12-10 14:30:17.348290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:52.770 [2024-12-10 14:30:17.348305] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [... ftl_dev_dump_bands entries for Bands 11-83 omitted: every band reports 0 / 261120 wr_cnt: 0 state: free ...] [2024-12-10 14:30:17.349489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84:
0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:52.771 [2024-12-10 14:30:17.349743] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:52.772 [2024-12-10 14:30:17.349757] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8b0d425-e333-4ad6-90c7-15392a67b1ea 00:24:52.772 [2024-12-10 14:30:17.349769] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:52.772 [2024-12-10 14:30:17.349785] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:52.772 [2024-12-10 14:30:17.349799] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:52.772 [2024-12-10 14:30:17.349813] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:52.772 [2024-12-10 14:30:17.349825] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:52.772 [2024-12-10 14:30:17.349839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:52.772 [2024-12-10 14:30:17.349851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:52.772 [2024-12-10 14:30:17.349865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:52.772 [2024-12-10 14:30:17.349875] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:52.772 [2024-12-10 14:30:17.349889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.772 [2024-12-10 14:30:17.349901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:52.772 [2024-12-10 14:30:17.349916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:24:52.772 [2024-12-10 14:30:17.349931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.772 [2024-12-10 14:30:17.367924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.772 [2024-12-10 14:30:17.367963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:52.772 [2024-12-10 14:30:17.367980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.960 ms 00:24:52.772 [2024-12-10 14:30:17.367992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.772 [2024-12-10 14:30:17.368525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.772 [2024-12-10 14:30:17.368551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:52.772 [2024-12-10 14:30:17.368571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:24:52.772 [2024-12-10 14:30:17.368583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.772 [2024-12-10 14:30:17.428487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.772 [2024-12-10 14:30:17.428528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:52.772 [2024-12-10 14:30:17.428545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.772 [2024-12-10 14:30:17.428558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.772 [2024-12-10 14:30:17.428612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.772 [2024-12-10 14:30:17.428624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:52.772 [2024-12-10 14:30:17.428643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.772 [2024-12-10 14:30:17.428655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.772 [2024-12-10 14:30:17.428763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.772 [2024-12-10 14:30:17.428779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:52.772 [2024-12-10 14:30:17.428794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.772 [2024-12-10 14:30:17.428805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.772 [2024-12-10 14:30:17.428833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.772 [2024-12-10 14:30:17.428845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:52.772 [2024-12-10 14:30:17.428859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:52.772 [2024-12-10 14:30:17.428874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.772 [2024-12-10 14:30:17.543503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:52.772 [2024-12-10 14:30:17.543558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:52.772 [2024-12-10 14:30:17.543577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:52.772 [2024-12-10 14:30:17.543590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.032 [2024-12-10 14:30:17.638961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.032 [2024-12-10 14:30:17.639013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:53.032 [2024-12-10 14:30:17.639032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.032 [2024-12-10 14:30:17.639048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.032 [2024-12-10 14:30:17.639158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.032 [2024-12-10 14:30:17.639173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:53.032 [2024-12-10 14:30:17.639190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.032 [2024-12-10 14:30:17.639202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.032 [2024-12-10 14:30:17.639264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.032 [2024-12-10 14:30:17.639278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:53.032 [2024-12-10 14:30:17.639293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.032 [2024-12-10 14:30:17.639305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.032 [2024-12-10 14:30:17.639427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.032 [2024-12-10 14:30:17.639442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:53.032 [2024-12-10 14:30:17.639457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.032 [2024-12-10 14:30:17.639468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.032 [2024-12-10 14:30:17.639515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.032 [2024-12-10 14:30:17.639528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:53.032 [2024-12-10 14:30:17.639542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.032 [2024-12-10 14:30:17.639554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.032 [2024-12-10 14:30:17.639598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.032 [2024-12-10 14:30:17.639610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:53.032 [2024-12-10 14:30:17.639625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.032 [2024-12-10 14:30:17.639637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.032 [2024-12-10 14:30:17.639714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.032 [2024-12-10 14:30:17.639728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:53.032 [2024-12-10 14:30:17.639743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.032 [2024-12-10 14:30:17.639755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.032 [2024-12-10 14:30:17.639893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 492.472 ms, result 0 00:24:53.032 true 00:24:53.032 14:30:17 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80355 
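The xtrace records below come from the killprocess helper in common/autotest_common.sh tearing down the SPDK app (pid 80355). As a reading aid, here is a minimal bash sketch of the control flow those traced statements suggest; it is reconstructed from this log alone, so the real helper almost certainly differs in details such as error handling:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                            # @954: a pid argument is required
    kill -0 "$pid"                                       # @958: probe that the process still exists
    if [ "$(uname)" = Linux ]; then                      # @959: only the Linux branch is visible in this log
        process_name=$(ps --no-headers -o comm= "$pid")  # @960: resolves to reactor_0 here
    fi
    [ "$process_name" = sudo ] && return 1               # @964: refuse to kill a sudo wrapper
    echo "killing process with pid $pid"                 # @972
    kill "$pid"                                          # @973: default SIGTERM
    wait "$pid"                                          # @978: reap the job so its exit status propagates
}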
00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80355 ']' 00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 80355 00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80355 00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:53.032 killing process with pid 80355 00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80355' 00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 80355 00:24:53.032 14:30:17 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 80355 00:24:58.335 14:30:22 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:01.653 262144+0 records in 00:25:01.653 262144+0 records out 00:25:01.653 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.01755 s, 267 MB/s 00:25:01.653 14:30:26 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:03.559 14:30:28 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:03.559 [2024-12-10 14:30:28.265918] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:25:03.559 [2024-12-10 14:30:28.266070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80625 ] 00:25:03.819 [2024-12-10 14:30:28.458896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.819 [2024-12-10 14:30:28.567462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.388 [2024-12-10 14:30:28.940636] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:04.388 [2024-12-10 14:30:28.940724] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:04.388 [2024-12-10 14:30:29.107161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.388 [2024-12-10 14:30:29.107221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:04.388 [2024-12-10 14:30:29.107238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:04.388 [2024-12-10 14:30:29.107249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.388 [2024-12-10 14:30:29.107298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.388 [2024-12-10 14:30:29.107314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:04.388 [2024-12-10 14:30:29.107326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:04.388 [2024-12-10 14:30:29.107337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.388 [2024-12-10 14:30:29.107361] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:25:04.388 [2024-12-10 14:30:29.108215] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:04.388 [2024-12-10 14:30:29.108248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.388 [2024-12-10 14:30:29.108260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:04.388 [2024-12-10 14:30:29.108272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:25:04.388 [2024-12-10 14:30:29.108284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.388 [2024-12-10 14:30:29.109748] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:04.388 [2024-12-10 14:30:29.128105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.388 [2024-12-10 14:30:29.128152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:04.388 [2024-12-10 14:30:29.128168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.387 ms 00:25:04.388 [2024-12-10 14:30:29.128179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.388 [2024-12-10 14:30:29.128253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.388 [2024-12-10 14:30:29.128268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:04.388 [2024-12-10 14:30:29.128281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:04.388 [2024-12-10 14:30:29.128293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.388 [2024-12-10 14:30:29.135191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.388 [2024-12-10 14:30:29.135225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:04.388 [2024-12-10 14:30:29.135238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.832 ms 00:25:04.388 [2024-12-10 14:30:29.135255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.388 [2024-12-10 14:30:29.135355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.388 [2024-12-10 14:30:29.135371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:04.388 [2024-12-10 14:30:29.135383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:04.388 [2024-12-10 14:30:29.135394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.388 [2024-12-10 14:30:29.135438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.388 [2024-12-10 14:30:29.135452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:04.388 [2024-12-10 14:30:29.135463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:04.388 [2024-12-10 14:30:29.135474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.388 [2024-12-10 14:30:29.135510] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:04.389 [2024-12-10 14:30:29.139941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.389 [2024-12-10 14:30:29.139979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:04.389 [2024-12-10 14:30:29.140001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.443 ms 00:25:04.389 [2024-12-10 14:30:29.140012] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.389 [2024-12-10 14:30:29.140051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.389 [2024-12-10 14:30:29.140064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:04.389 [2024-12-10 14:30:29.140076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:04.389 [2024-12-10 14:30:29.140088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.389 [2024-12-10 14:30:29.140142] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:04.389 [2024-12-10 14:30:29.140178] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:04.389 [2024-12-10 14:30:29.140214] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:04.389 [2024-12-10 14:30:29.140240] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:04.389 [2024-12-10 14:30:29.140325] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:04.389 [2024-12-10 14:30:29.140340] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:04.389 [2024-12-10 14:30:29.140355] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:04.389 [2024-12-10 14:30:29.140369] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:04.389 [2024-12-10 14:30:29.140383] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:04.389 [2024-12-10 14:30:29.140395] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:04.389 [2024-12-10 14:30:29.140407] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:04.389 [2024-12-10 14:30:29.140426] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:04.389 [2024-12-10 14:30:29.140437] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:04.389 [2024-12-10 14:30:29.140449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.389 [2024-12-10 14:30:29.140461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:04.389 [2024-12-10 14:30:29.140473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:25:04.389 [2024-12-10 14:30:29.140484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.389 [2024-12-10 14:30:29.140552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.389 [2024-12-10 14:30:29.140565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:04.389 [2024-12-10 14:30:29.140576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:04.389 [2024-12-10 14:30:29.140587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.389 [2024-12-10 14:30:29.140706] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:04.389 [2024-12-10 14:30:29.140724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:04.389 [2024-12-10 14:30:29.140736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:25:04.389 [2024-12-10 14:30:29.140748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.389 [2024-12-10 14:30:29.140759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:04.389 [2024-12-10 14:30:29.140770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:04.389 [2024-12-10 14:30:29.140783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:04.389 [2024-12-10 14:30:29.140794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:04.389 [2024-12-10 14:30:29.140805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:04.389 [2024-12-10 14:30:29.140816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:04.389 [2024-12-10 14:30:29.140827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:04.389 [2024-12-10 14:30:29.140837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:04.389 [2024-12-10 14:30:29.140848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:04.389 [2024-12-10 14:30:29.140873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:04.389 [2024-12-10 14:30:29.140885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:04.389 [2024-12-10 14:30:29.140896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.389 [2024-12-10 14:30:29.140906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:04.389 [2024-12-10 14:30:29.140916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:04.389 [2024-12-10 14:30:29.140926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.389 [2024-12-10 14:30:29.140937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:04.389 [2024-12-10 14:30:29.140948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:04.389 [2024-12-10 14:30:29.140958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.389 [2024-12-10 14:30:29.140968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:04.389 [2024-12-10 14:30:29.140977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:04.389 [2024-12-10 14:30:29.140987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.389 [2024-12-10 14:30:29.140997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:04.389 [2024-12-10 14:30:29.141008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:04.389 [2024-12-10 14:30:29.141017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.389 [2024-12-10 14:30:29.141027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:04.389 [2024-12-10 14:30:29.141038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:04.389 [2024-12-10 14:30:29.141048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.389 [2024-12-10 14:30:29.141058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:04.389 [2024-12-10 14:30:29.141068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:04.389 [2024-12-10 14:30:29.141078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:04.389 [2024-12-10 14:30:29.141088] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:25:04.389 [2024-12-10 14:30:29.141098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:04.389 [2024-12-10 14:30:29.141108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:04.389 [2024-12-10 14:30:29.141118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:04.389 [2024-12-10 14:30:29.141128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:04.389 [2024-12-10 14:30:29.141138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.389 [2024-12-10 14:30:29.141148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:04.389 [2024-12-10 14:30:29.141157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:04.389 [2024-12-10 14:30:29.141167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.389 [2024-12-10 14:30:29.141177] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:04.389 [2024-12-10 14:30:29.141188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:04.389 [2024-12-10 14:30:29.141198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:04.389 [2024-12-10 14:30:29.141209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.389 [2024-12-10 14:30:29.141220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:04.389 [2024-12-10 14:30:29.141230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:04.389 [2024-12-10 14:30:29.141240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:04.389 [2024-12-10 14:30:29.141250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:04.389 [2024-12-10 14:30:29.141259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:04.389 [2024-12-10 14:30:29.141270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:04.389 [2024-12-10 14:30:29.141281] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:04.389 [2024-12-10 14:30:29.141294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:04.389 [2024-12-10 14:30:29.141314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:04.389 [2024-12-10 14:30:29.141326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:04.389 [2024-12-10 14:30:29.141338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:04.389 [2024-12-10 14:30:29.141350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:04.389 [2024-12-10 14:30:29.141362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:04.389 [2024-12-10 14:30:29.141373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:04.389 [2024-12-10 14:30:29.141386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:04.389 [2024-12-10 14:30:29.141397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:04.389 [2024-12-10 14:30:29.141408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:04.389 [2024-12-10 14:30:29.141419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:04.389 [2024-12-10 14:30:29.141430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:04.389 [2024-12-10 14:30:29.141442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:04.389 [2024-12-10 14:30:29.141463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:04.389 [2024-12-10 14:30:29.141474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:04.389 [2024-12-10 14:30:29.141485] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:04.389 [2024-12-10 14:30:29.141498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:04.389 [2024-12-10 14:30:29.141511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:04.390 [2024-12-10 14:30:29.141523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:04.390 [2024-12-10 14:30:29.141535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:04.390 [2024-12-10 14:30:29.141546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:04.390 [2024-12-10 14:30:29.141558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.390 [2024-12-10 14:30:29.141569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:04.390 [2024-12-10 14:30:29.141581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:25:04.390 [2024-12-10 14:30:29.141592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.390 [2024-12-10 14:30:29.181324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.390 [2024-12-10 14:30:29.181365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:04.390 [2024-12-10 14:30:29.181380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.742 ms 00:25:04.390 [2024-12-10 14:30:29.181400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.390 [2024-12-10 14:30:29.181482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.390 [2024-12-10 14:30:29.181494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:04.390 [2024-12-10 14:30:29.181508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.055 ms 00:25:04.390 [2024-12-10 14:30:29.181518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.250657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.250707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:04.650 [2024-12-10 14:30:29.250723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.184 ms 00:25:04.650 [2024-12-10 14:30:29.250735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.250782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.250795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:04.650 [2024-12-10 14:30:29.250813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:04.650 [2024-12-10 14:30:29.250824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.251322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.251349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:04.650 [2024-12-10 14:30:29.251362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:25:04.650 [2024-12-10 14:30:29.251373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.251488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.251503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:04.650 [2024-12-10 14:30:29.251521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:04.650 [2024-12-10 14:30:29.251532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.268970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.269017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:04.650 [2024-12-10 14:30:29.269033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.442 ms 00:25:04.650 [2024-12-10 14:30:29.269044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.286591] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:04.650 [2024-12-10 14:30:29.286639] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:04.650 [2024-12-10 14:30:29.286656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.286676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:04.650 [2024-12-10 14:30:29.286689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.530 ms 00:25:04.650 [2024-12-10 14:30:29.286700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.315138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.315192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:04.650 [2024-12-10 14:30:29.315207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.436 ms 00:25:04.650 [2024-12-10 14:30:29.315219] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.332538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.332583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:04.650 [2024-12-10 14:30:29.332598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.299 ms 00:25:04.650 [2024-12-10 14:30:29.332609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.349361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.349402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:04.650 [2024-12-10 14:30:29.349417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.736 ms 00:25:04.650 [2024-12-10 14:30:29.349428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.350085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.350115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:04.650 [2024-12-10 14:30:29.350128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:25:04.650 [2024-12-10 14:30:29.350144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.432695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.432753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:04.650 [2024-12-10 14:30:29.432770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.659 ms 00:25:04.650 [2024-12-10 14:30:29.432790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.442585] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:04.650 [2024-12-10 14:30:29.444990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.445027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:04.650 [2024-12-10 14:30:29.445042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.172 ms 00:25:04.650 [2024-12-10 14:30:29.445053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.445129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.445144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:04.650 [2024-12-10 14:30:29.445157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:04.650 [2024-12-10 14:30:29.445169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.445248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.445263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:04.650 [2024-12-10 14:30:29.445275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:04.650 [2024-12-10 14:30:29.445286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.445309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.445321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:25:04.650 [2024-12-10 14:30:29.445332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:04.650 [2024-12-10 14:30:29.445343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.445381] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:04.650 [2024-12-10 14:30:29.445399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.445411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:04.650 [2024-12-10 14:30:29.445423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:04.650 [2024-12-10 14:30:29.445434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.479219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.479265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:04.650 [2024-12-10 14:30:29.479281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.807 ms 00:25:04.650 [2024-12-10 14:30:29.479300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.479372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.650 [2024-12-10 14:30:29.479386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:04.650 [2024-12-10 14:30:29.479400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:04.650 [2024-12-10 14:30:29.479411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.650 [2024-12-10 14:30:29.480550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.584 ms, result 0 00:25:06.028  [2024-12-10T14:30:31.798Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-10T14:30:32.735Z] Copying: 44/1024 [MB] (22 MBps) [2024-12-10T14:30:33.673Z] Copying: 67/1024 [MB] (23 MBps) [2024-12-10T14:30:34.611Z] Copying: 90/1024 [MB] (23 MBps) [2024-12-10T14:30:35.549Z] Copying: 112/1024 [MB] (22 MBps) [2024-12-10T14:30:36.486Z] Copying: 135/1024 [MB] (23 MBps) [2024-12-10T14:30:37.864Z] Copying: 158/1024 [MB] (22 MBps) [2024-12-10T14:30:38.802Z] Copying: 181/1024 [MB] (23 MBps) [2024-12-10T14:30:39.740Z] Copying: 204/1024 [MB] (22 MBps) [2024-12-10T14:30:40.678Z] Copying: 227/1024 [MB] (22 MBps) [2024-12-10T14:30:41.616Z] Copying: 250/1024 [MB] (22 MBps) [2024-12-10T14:30:42.554Z] Copying: 273/1024 [MB] (23 MBps) [2024-12-10T14:30:43.493Z] Copying: 296/1024 [MB] (23 MBps) [2024-12-10T14:30:44.872Z] Copying: 319/1024 [MB] (22 MBps) [2024-12-10T14:30:45.810Z] Copying: 342/1024 [MB] (22 MBps) [2024-12-10T14:30:46.748Z] Copying: 365/1024 [MB] (23 MBps) [2024-12-10T14:30:47.730Z] Copying: 388/1024 [MB] (23 MBps) [2024-12-10T14:30:48.667Z] Copying: 411/1024 [MB] (23 MBps) [2024-12-10T14:30:49.605Z] Copying: 434/1024 [MB] (23 MBps) [2024-12-10T14:30:50.542Z] Copying: 458/1024 [MB] (23 MBps) [2024-12-10T14:30:51.478Z] Copying: 481/1024 [MB] (23 MBps) [2024-12-10T14:30:52.858Z] Copying: 504/1024 [MB] (23 MBps) [2024-12-10T14:30:53.794Z] Copying: 527/1024 [MB] (22 MBps) [2024-12-10T14:30:54.731Z] Copying: 550/1024 [MB] (22 MBps) [2024-12-10T14:30:55.668Z] Copying: 573/1024 [MB] (22 MBps) [2024-12-10T14:30:56.605Z] Copying: 595/1024 [MB] (22 MBps) [2024-12-10T14:30:57.544Z] Copying: 619/1024 [MB] (23 
MBps) [2024-12-10T14:30:58.481Z] Copying: 642/1024 [MB] (23 MBps) [2024-12-10T14:30:59.511Z] Copying: 665/1024 [MB] (23 MBps) [2024-12-10T14:31:00.448Z] Copying: 689/1024 [MB] (23 MBps) [2024-12-10T14:31:01.826Z] Copying: 712/1024 [MB] (22 MBps) [2024-12-10T14:31:02.763Z] Copying: 735/1024 [MB] (22 MBps) [2024-12-10T14:31:03.700Z] Copying: 758/1024 [MB] (23 MBps) [2024-12-10T14:31:04.638Z] Copying: 781/1024 [MB] (23 MBps) [2024-12-10T14:31:05.575Z] Copying: 805/1024 [MB] (23 MBps) [2024-12-10T14:31:06.512Z] Copying: 828/1024 [MB] (23 MBps) [2024-12-10T14:31:07.449Z] Copying: 851/1024 [MB] (22 MBps) [2024-12-10T14:31:08.825Z] Copying: 874/1024 [MB] (23 MBps) [2024-12-10T14:31:09.762Z] Copying: 897/1024 [MB] (23 MBps) [2024-12-10T14:31:10.698Z] Copying: 921/1024 [MB] (23 MBps) [2024-12-10T14:31:11.636Z] Copying: 944/1024 [MB] (23 MBps) [2024-12-10T14:31:12.574Z] Copying: 967/1024 [MB] (23 MBps) [2024-12-10T14:31:13.511Z] Copying: 990/1024 [MB] (23 MBps) [2024-12-10T14:31:14.080Z] Copying: 1014/1024 [MB] (23 MBps) [2024-12-10T14:31:14.080Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-10 14:31:13.836064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.246 [2024-12-10 14:31:13.836115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:49.246 [2024-12-10 14:31:13.836134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:49.246 [2024-12-10 14:31:13.836145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.246 [2024-12-10 14:31:13.836170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:49.246 [2024-12-10 14:31:13.840340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.246 [2024-12-10 14:31:13.840382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:49.246 [2024-12-10 14:31:13.840405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.159 ms 00:25:49.246 [2024-12-10 14:31:13.840417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.246 [2024-12-10 14:31:13.842256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.246 [2024-12-10 14:31:13.842302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:49.246 [2024-12-10 14:31:13.842317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:25:49.246 [2024-12-10 14:31:13.842328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:13.859769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:13.859816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:49.247 [2024-12-10 14:31:13.859830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.449 ms 00:25:49.247 [2024-12-10 14:31:13.859841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:13.864471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:13.864511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:49.247 [2024-12-10 14:31:13.864525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.587 ms 00:25:49.247 [2024-12-10 14:31:13.864536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:13.899529] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:13.899573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:49.247 [2024-12-10 14:31:13.899588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.003 ms 00:25:49.247 [2024-12-10 14:31:13.899599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:13.919987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:13.920031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:49.247 [2024-12-10 14:31:13.920046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.380 ms 00:25:49.247 [2024-12-10 14:31:13.920058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:13.920184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:13.920205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:49.247 [2024-12-10 14:31:13.920218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:49.247 [2024-12-10 14:31:13.920229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:13.955312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:13.955357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:49.247 [2024-12-10 14:31:13.955371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.121 ms 00:25:49.247 [2024-12-10 14:31:13.955382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:13.988714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:13.988768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:49.247 [2024-12-10 14:31:13.988783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.345 ms 00:25:49.247 [2024-12-10 14:31:13.988794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:14.021962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:14.022006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:49.247 [2024-12-10 14:31:14.022020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.180 ms 00:25:49.247 [2024-12-10 14:31:14.022031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:14.054405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:14.054447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:49.247 [2024-12-10 14:31:14.054461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.347 ms 00:25:49.247 [2024-12-10 14:31:14.054471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:14.054512] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:49.247 [2024-12-10 14:31:14.054529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054563] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 
14:31:14.054864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.054992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 
00:25:49.247 [2024-12-10 14:31:14.055170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 
wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:49.247 [2024-12-10 14:31:14.055727] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:49.247 [2024-12-10 14:31:14.055744] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
f8b0d425-e333-4ad6-90c7-15392a67b1ea 00:25:49.247 [2024-12-10 14:31:14.055757] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:49.247 [2024-12-10 14:31:14.055768] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:49.247 [2024-12-10 14:31:14.055778] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:49.247 [2024-12-10 14:31:14.055789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:49.247 [2024-12-10 14:31:14.055800] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:49.247 [2024-12-10 14:31:14.055822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:49.247 [2024-12-10 14:31:14.055833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:49.247 [2024-12-10 14:31:14.055843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:49.247 [2024-12-10 14:31:14.055853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:49.247 [2024-12-10 14:31:14.055864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:14.055875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:49.247 [2024-12-10 14:31:14.055887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:25:49.247 [2024-12-10 14:31:14.055899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:14.074155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:14.074195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:49.247 [2024-12-10 14:31:14.074208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.245 ms 00:25:49.247 [2024-12-10 14:31:14.074219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.247 [2024-12-10 14:31:14.074720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.247 [2024-12-10 14:31:14.074741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:49.247 [2024-12-10 14:31:14.074753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:25:49.247 [2024-12-10 14:31:14.074772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.123028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.123070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:49.507 [2024-12-10 14:31:14.123084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.123095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.123145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.123159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:49.507 [2024-12-10 14:31:14.123170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.123188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.123278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.123294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:49.507 
[2024-12-10 14:31:14.123305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.123318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.123335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.123347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:49.507 [2024-12-10 14:31:14.123358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.123369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.238946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.239001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:49.507 [2024-12-10 14:31:14.239016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.239028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.333886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.333937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:49.507 [2024-12-10 14:31:14.333953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.333973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.334063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.334077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:49.507 [2024-12-10 14:31:14.334089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.334101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.334142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.334155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.507 [2024-12-10 14:31:14.334166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.334177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.334294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.334308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.507 [2024-12-10 14:31:14.334320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.334332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.334372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.334385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:49.507 [2024-12-10 14:31:14.334397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.334407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.334445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.334462] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.507 [2024-12-10 14:31:14.334474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.334485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.334527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.507 [2024-12-10 14:31:14.334539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.507 [2024-12-10 14:31:14.334551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.507 [2024-12-10 14:31:14.334563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.507 [2024-12-10 14:31:14.334709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 499.395 ms, result 0 00:25:50.886 00:25:50.886 00:25:50.886 14:31:15 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:50.886 [2024-12-10 14:31:15.635701] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:25:50.886 [2024-12-10 14:31:15.635827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81104 ] 00:25:51.144 [2024-12-10 14:31:15.819363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.145 [2024-12-10 14:31:15.924648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.714 [2024-12-10 14:31:16.264175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:51.714 [2024-12-10 14:31:16.264253] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:51.714 [2024-12-10 14:31:16.424767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.424823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:51.714 [2024-12-10 14:31:16.424841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:51.714 [2024-12-10 14:31:16.424853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.424910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.424928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:51.714 [2024-12-10 14:31:16.424941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:51.714 [2024-12-10 14:31:16.424952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.424977] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:51.714 [2024-12-10 14:31:16.425824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:51.714 [2024-12-10 14:31:16.425858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.425870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:51.714 [2024-12-10 14:31:16.425882] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:25:51.714 [2024-12-10 14:31:16.425894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.427371] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:51.714 [2024-12-10 14:31:16.445572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.445617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:51.714 [2024-12-10 14:31:16.445633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.232 ms 00:25:51.714 [2024-12-10 14:31:16.445645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.445726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.445741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:51.714 [2024-12-10 14:31:16.445755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:51.714 [2024-12-10 14:31:16.445766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.452638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.452679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:51.714 [2024-12-10 14:31:16.452692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:25:51.714 [2024-12-10 14:31:16.452709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.452786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.452801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:51.714 [2024-12-10 14:31:16.452815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:51.714 [2024-12-10 14:31:16.452826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.452870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.452884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:51.714 [2024-12-10 14:31:16.452895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:51.714 [2024-12-10 14:31:16.452906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.452936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:51.714 [2024-12-10 14:31:16.457403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.457442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:51.714 [2024-12-10 14:31:16.457468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.479 ms 00:25:51.714 [2024-12-10 14:31:16.457479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.457515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.457528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:51.714 [2024-12-10 14:31:16.457540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:51.714 [2024-12-10 14:31:16.457551] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.457607] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:51.714 [2024-12-10 14:31:16.457635] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:51.714 [2024-12-10 14:31:16.457684] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:51.714 [2024-12-10 14:31:16.457707] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:51.714 [2024-12-10 14:31:16.457792] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:51.714 [2024-12-10 14:31:16.457806] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:51.714 [2024-12-10 14:31:16.457821] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:51.714 [2024-12-10 14:31:16.457835] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:51.714 [2024-12-10 14:31:16.457849] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:51.714 [2024-12-10 14:31:16.457861] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:51.714 [2024-12-10 14:31:16.457873] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:51.714 [2024-12-10 14:31:16.457888] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:51.714 [2024-12-10 14:31:16.457900] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:51.714 [2024-12-10 14:31:16.457912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.457924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:51.714 [2024-12-10 14:31:16.457935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:25:51.714 [2024-12-10 14:31:16.457946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.714 [2024-12-10 14:31:16.458019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.714 [2024-12-10 14:31:16.458032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:51.714 [2024-12-10 14:31:16.458044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:51.714 [2024-12-10 14:31:16.458055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.715 [2024-12-10 14:31:16.458148] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:51.715 [2024-12-10 14:31:16.458165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:51.715 [2024-12-10 14:31:16.458177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:51.715 [2024-12-10 14:31:16.458189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:51.715 [2024-12-10 14:31:16.458212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:51.715 
[2024-12-10 14:31:16.458235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:51.715 [2024-12-10 14:31:16.458246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:51.715 [2024-12-10 14:31:16.458268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:51.715 [2024-12-10 14:31:16.458278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:51.715 [2024-12-10 14:31:16.458288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:51.715 [2024-12-10 14:31:16.458310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:51.715 [2024-12-10 14:31:16.458321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:51.715 [2024-12-10 14:31:16.458332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:51.715 [2024-12-10 14:31:16.458353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:51.715 [2024-12-10 14:31:16.458363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:51.715 [2024-12-10 14:31:16.458384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:51.715 [2024-12-10 14:31:16.458404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:51.715 [2024-12-10 14:31:16.458414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:51.715 [2024-12-10 14:31:16.458434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:51.715 [2024-12-10 14:31:16.458444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:51.715 [2024-12-10 14:31:16.458466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:51.715 [2024-12-10 14:31:16.458476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:51.715 [2024-12-10 14:31:16.458496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:51.715 [2024-12-10 14:31:16.458506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:51.715 [2024-12-10 14:31:16.458527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:51.715 [2024-12-10 14:31:16.458537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:51.715 [2024-12-10 14:31:16.458547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:51.715 [2024-12-10 14:31:16.458559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:51.715 [2024-12-10 14:31:16.458570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:25:51.715 [2024-12-10 14:31:16.458581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:51.715 [2024-12-10 14:31:16.458601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:51.715 [2024-12-10 14:31:16.458611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458620] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:51.715 [2024-12-10 14:31:16.458631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:51.715 [2024-12-10 14:31:16.458642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:51.715 [2024-12-10 14:31:16.458653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.715 [2024-12-10 14:31:16.458664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:51.715 [2024-12-10 14:31:16.458706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:51.715 [2024-12-10 14:31:16.458718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:51.715 [2024-12-10 14:31:16.458729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:51.715 [2024-12-10 14:31:16.458741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:51.715 [2024-12-10 14:31:16.458753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:51.715 [2024-12-10 14:31:16.458765] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:51.715 [2024-12-10 14:31:16.458779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:51.715 [2024-12-10 14:31:16.458796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:51.715 [2024-12-10 14:31:16.458809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:51.715 [2024-12-10 14:31:16.458821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:51.715 [2024-12-10 14:31:16.458833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:51.715 [2024-12-10 14:31:16.458845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:51.715 [2024-12-10 14:31:16.458857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:51.715 [2024-12-10 14:31:16.458868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:51.715 [2024-12-10 14:31:16.458881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:51.715 [2024-12-10 14:31:16.458893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:51.715 [2024-12-10 14:31:16.458905] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:51.715 [2024-12-10 14:31:16.458918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:51.715 [2024-12-10 14:31:16.458929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:51.715 [2024-12-10 14:31:16.458941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:51.715 [2024-12-10 14:31:16.458954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:51.715 [2024-12-10 14:31:16.458969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:51.715 [2024-12-10 14:31:16.458983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:51.715 [2024-12-10 14:31:16.458995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:51.715 [2024-12-10 14:31:16.459008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:51.715 [2024-12-10 14:31:16.459019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:51.715 [2024-12-10 14:31:16.459031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:51.715 [2024-12-10 14:31:16.459043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.715 [2024-12-10 14:31:16.459055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:51.715 [2024-12-10 14:31:16.459066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:25:51.715 [2024-12-10 14:31:16.459077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.715 [2024-12-10 14:31:16.492641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.715 [2024-12-10 14:31:16.492689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:51.715 [2024-12-10 14:31:16.492704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.567 ms 00:25:51.715 [2024-12-10 14:31:16.492721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.715 [2024-12-10 14:31:16.492794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.715 [2024-12-10 14:31:16.492806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:51.715 [2024-12-10 14:31:16.492819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:51.715 [2024-12-10 14:31:16.492830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.975 [2024-12-10 14:31:16.566979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.975 [2024-12-10 14:31:16.567022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:51.975 [2024-12-10 14:31:16.567037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.207 ms 
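The layout dump above is internally consistent, and its key numbers can be cross-checked with a few lines of arithmetic. A minimal sanity-check sketch in Python follows (standalone illustration only, not SPDK code; every constant is copied from the log records above, and it assumes the progress display's [MB] figure is binary megabytes):

# Cross-check the FTL layout numbers reported in this run.
MIB = 1024 * 1024

# "L2P entries: 20971520" and "L2P address size: 4" (bytes per entry)
l2p_entries = 20971520
l2p_addr_size = 4
# matches "Region l2p ... blocks: 80.00 MiB"
assert l2p_entries * l2p_addr_size == 80 * MIB

# spdk_dd was invoked with --count=262144 blocks and the copy phase
# reported "Copying: 1024/1024 [MB]", implying a 4 KiB logical block
# (assuming MB here means MiB).
blocks_copied = 262144
copied_mib = 1024
block_size = copied_mib * MIB // blocks_copied
assert block_size == 4096

# "Band N: 0 / 261120" blocks per band at 4 KiB each
band_blocks = 261120
print(f"user space mapped by L2P: {l2p_entries * block_size / 2**30:.0f} GiB")
print(f"user data per band: {band_blocks * block_size / MIB:.0f} MiB")

Run as-is, this prints 80 GiB of L2P-mapped user space and 1020 MiB of user data per band, which lines up with the 100-band validity dumps and the 80.00 MiB l2p region shown in this log.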
00:25:51.975 [2024-12-10 14:31:16.567049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.975 [2024-12-10 14:31:16.567093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.975 [2024-12-10 14:31:16.567106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:51.976 [2024-12-10 14:31:16.567124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:51.976 [2024-12-10 14:31:16.567136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.567631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.567658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:51.976 [2024-12-10 14:31:16.567683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:25:51.976 [2024-12-10 14:31:16.567696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.567812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.567827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:51.976 [2024-12-10 14:31:16.567847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:51.976 [2024-12-10 14:31:16.567858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.587106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.587150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:51.976 [2024-12-10 14:31:16.587165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.254 ms 00:25:51.976 [2024-12-10 14:31:16.587176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.605725] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:51.976 [2024-12-10 14:31:16.605771] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:51.976 [2024-12-10 14:31:16.605788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.605799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:51.976 [2024-12-10 14:31:16.605812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.534 ms 00:25:51.976 [2024-12-10 14:31:16.605823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.634420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.634464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:51.976 [2024-12-10 14:31:16.634479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.594 ms 00:25:51.976 [2024-12-10 14:31:16.634492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.652225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.652269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:51.976 [2024-12-10 14:31:16.652284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.700 ms 00:25:51.976 [2024-12-10 14:31:16.652295] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.669253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.669296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:51.976 [2024-12-10 14:31:16.669310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.942 ms 00:25:51.976 [2024-12-10 14:31:16.669322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.670078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.670110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:51.976 [2024-12-10 14:31:16.670128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:25:51.976 [2024-12-10 14:31:16.670139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.756100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.756159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:51.976 [2024-12-10 14:31:16.756184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.075 ms 00:25:51.976 [2024-12-10 14:31:16.756197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.766250] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:51.976 [2024-12-10 14:31:16.768624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.768659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:51.976 [2024-12-10 14:31:16.768681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.401 ms 00:25:51.976 [2024-12-10 14:31:16.768694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.768772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.768787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:51.976 [2024-12-10 14:31:16.768805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:51.976 [2024-12-10 14:31:16.768816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.768887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.768901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:51.976 [2024-12-10 14:31:16.768913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:51.976 [2024-12-10 14:31:16.768936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.768959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.976 [2024-12-10 14:31:16.768972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:51.976 [2024-12-10 14:31:16.768983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:51.976 [2024-12-10 14:31:16.768994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.976 [2024-12-10 14:31:16.769037] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:51.976 [2024-12-10 14:31:16.769051] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action
00:25:51.976 [2024-12-10 14:31:16.769062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:25:51.976 [2024-12-10 14:31:16.769075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:25:51.976 [2024-12-10 14:31:16.769086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.976 [2024-12-10 14:31:16.804069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:51.976 [2024-12-10 14:31:16.804115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:25:51.976 [2024-12-10 14:31:16.804138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.017 ms
00:25:51.976 [2024-12-10 14:31:16.804149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.976 [2024-12-10 14:31:16.804223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:51.976 [2024-12-10 14:31:16.804236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:51.976 [2024-12-10 14:31:16.804250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:25:51.976 [2024-12-10 14:31:16.804261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.976 [2024-12-10 14:31:16.805551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.854 ms, result 0
00:25:53.355  [2024-12-10T14:31:19.126Z] Copying: 23/1024 [MB] (23 MBps) [... 41 intermediate progress updates elided ...] [2024-12-10T14:32:00.057Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-12-10 14:31:59.793487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.223 [2024-12-10 14:31:59.793563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:35.223 [2024-12-10 14:31:59.793588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:26:35.223 [2024-12-10 14:31:59.793602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.223 [2024-12-10 14:31:59.793631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:35.223 [2024-12-10 14:31:59.801538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.223 [2024-12-10 14:31:59.801599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:35.223 [2024-12-10 14:31:59.801619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.895 ms
00:26:35.223 [2024-12-10 14:31:59.801637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.223 [2024-12-10 14:31:59.802007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.223 [2024-12-10 14:31:59.802042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:35.223 [2024-12-10 14:31:59.802061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms
00:26:35.223 [2024-12-10 14:31:59.802077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.223 [2024-12-10 14:31:59.805882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.223 [2024-12-10 14:31:59.805908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:35.223 [2024-12-10 14:31:59.805922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.786 ms
00:26:35.223 [2024-12-10 14:31:59.805939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.223 [2024-12-10 14:31:59.811414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.223 [2024-12-10 14:31:59.811450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:26:35.223 [2024-12-10 14:31:59.811464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.463 ms
00:26:35.223 [2024-12-10 14:31:59.811475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.223 [2024-12-10 14:31:59.847970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.223 [2024-12-10 14:31:59.848025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:26:35.223 [2024-12-10 14:31:59.848057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.483 ms
00:26:35.223 [2024-12-10 14:31:59.848067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.223 [2024-12-10 14:31:59.868043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.223 [2024-12-10 14:31:59.868080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
valid map metadata 00:26:35.223 [2024-12-10 14:31:59.868094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.968 ms 00:26:35.223 [2024-12-10 14:31:59.868105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.223 [2024-12-10 14:31:59.868265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.223 [2024-12-10 14:31:59.868279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:35.223 [2024-12-10 14:31:59.868290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:26:35.223 [2024-12-10 14:31:59.868300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.223 [2024-12-10 14:31:59.902497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.223 [2024-12-10 14:31:59.902532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:35.223 [2024-12-10 14:31:59.902545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.236 ms 00:26:35.223 [2024-12-10 14:31:59.902553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.223 [2024-12-10 14:31:59.937031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.223 [2024-12-10 14:31:59.937066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:35.223 [2024-12-10 14:31:59.937078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.480 ms 00:26:35.223 [2024-12-10 14:31:59.937087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.223 [2024-12-10 14:31:59.970832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.223 [2024-12-10 14:31:59.970867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:35.223 [2024-12-10 14:31:59.970895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.748 ms 00:26:35.223 [2024-12-10 14:31:59.970904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.223 [2024-12-10 14:32:00.005667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.223 [2024-12-10 14:32:00.005710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:35.223 [2024-12-10 14:32:00.005723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.740 ms 00:26:35.223 [2024-12-10 14:32:00.005733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.223 [2024-12-10 14:32:00.005771] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:35.223 [2024-12-10 14:32:00.005797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005873] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.005991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006140] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:35.223 [2024-12-10 14:32:00.006225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 
14:32:00.006402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:26:35.224 [2024-12-10 14:32:00.006685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:35.224 [2024-12-10 14:32:00.006902] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:35.224 [2024-12-10 14:32:00.006912] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8b0d425-e333-4ad6-90c7-15392a67b1ea 00:26:35.224 [2024-12-10 14:32:00.006923] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:35.224 [2024-12-10 14:32:00.006933] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:35.224 [2024-12-10 14:32:00.006943] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:35.224 [2024-12-10 14:32:00.006954] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:35.224 [2024-12-10 14:32:00.006975] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:26:35.224 [2024-12-10 14:32:00.006986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:35.224 [2024-12-10 14:32:00.006996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:35.224 [2024-12-10 14:32:00.007006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:35.224 [2024-12-10 14:32:00.007015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:35.224 [2024-12-10 14:32:00.007025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.224 [2024-12-10 14:32:00.007037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:35.224 [2024-12-10 14:32:00.007048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:26:35.224 [2024-12-10 14:32:00.007063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.224 [2024-12-10 14:32:00.027800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.224 [2024-12-10 14:32:00.027834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:35.224 [2024-12-10 14:32:00.027847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.734 ms 00:26:35.224 [2024-12-10 14:32:00.027865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.224 [2024-12-10 14:32:00.028479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.224 [2024-12-10 14:32:00.028497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:35.224 [2024-12-10 14:32:00.028516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:26:35.224 [2024-12-10 14:32:00.028526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.484 [2024-12-10 14:32:00.085358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.484 [2024-12-10 14:32:00.085405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:35.484 [2024-12-10 14:32:00.085437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.484 [2024-12-10 14:32:00.085456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.484 [2024-12-10 14:32:00.085526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.484 [2024-12-10 14:32:00.085539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:35.484 [2024-12-10 14:32:00.085556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.484 [2024-12-10 14:32:00.085567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.484 [2024-12-10 14:32:00.085665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.484 [2024-12-10 14:32:00.085696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:35.484 [2024-12-10 14:32:00.085708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.484 [2024-12-10 14:32:00.085719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.484 [2024-12-10 14:32:00.085738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.484 [2024-12-10 14:32:00.085750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:35.484 [2024-12-10 14:32:00.085761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.484 [2024-12-10 14:32:00.085778] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.484 [2024-12-10 14:32:00.217455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.484 [2024-12-10 14:32:00.217531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:35.484 [2024-12-10 14:32:00.217551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.484 [2024-12-10 14:32:00.217579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.743 [2024-12-10 14:32:00.321564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.743 [2024-12-10 14:32:00.321624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:35.743 [2024-12-10 14:32:00.321648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.743 [2024-12-10 14:32:00.321660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.743 [2024-12-10 14:32:00.321784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.743 [2024-12-10 14:32:00.321799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:35.743 [2024-12-10 14:32:00.321810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.743 [2024-12-10 14:32:00.321820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.743 [2024-12-10 14:32:00.321875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.743 [2024-12-10 14:32:00.321888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:35.743 [2024-12-10 14:32:00.321899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.743 [2024-12-10 14:32:00.321911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.743 [2024-12-10 14:32:00.322028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.743 [2024-12-10 14:32:00.322043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:35.743 [2024-12-10 14:32:00.322054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.743 [2024-12-10 14:32:00.322065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.743 [2024-12-10 14:32:00.322104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.744 [2024-12-10 14:32:00.322118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:35.744 [2024-12-10 14:32:00.322129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.744 [2024-12-10 14:32:00.322139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.744 [2024-12-10 14:32:00.322191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.744 [2024-12-10 14:32:00.322204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:35.744 [2024-12-10 14:32:00.322215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.744 [2024-12-10 14:32:00.322226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.744 [2024-12-10 14:32:00.322275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.744 [2024-12-10 14:32:00.322289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:35.744 [2024-12-10 14:32:00.322300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:26:35.744 [2024-12-10 14:32:00.322311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.744 [2024-12-10 14:32:00.322459] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.794 ms, result 0 00:26:36.681 00:26:36.681 00:26:36.681 14:32:01 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:38.585 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:38.585
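
The 'FTL shutdown' management process above completes in 529.794 ms, and the md5sum -c check confirms that the testfile read back from the FTL device matches the checksum recorded before shutdown. Each management step in these passes is emitted by mngt/ftl_mngt.c trace_step as a "name:" record followed by a "duration:" record, so the slower steps can be ranked with a small parser along the following lines (a hypothetical helper sketch, not part of the SPDK tree; it assumes the raw autorun log on stdin):

    import re
    import sys

    # trace_step logs each FTL management step as a "name: <step>" record
    # followed by a "duration: <float> ms" record; pair them up (a pair can
    # span wrapped log lines, hence DOTALL) and print the slowest first.
    log = sys.stdin.read()
    pair = re.compile(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3}.*?duration: ([0-9.]+) ms", re.DOTALL)
    steps = [(float(ms), name) for name, ms in pair.findall(log)]
    for ms, name in sorted(steps, reverse=True)[:10]:
        print(f"{ms:9.3f} ms  {name}")

On this log such a ranking would surface entries like the 119.517 ms 'Persist P2L metadata' step in the second shutdown pass and the 89.388 ms 'Restore P2L checkpoints' step in the startup that follows below.
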
14:32:03 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 [2024-12-10 14:32:03.272492] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... [2024-12-10 14:32:03.272622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81584 ] 00:26:38.844 [2024-12-10 14:32:03.456705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.844 [2024-12-10 14:32:03.591487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.414 [2024-12-10 14:32:03.985722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:39.414 [2024-12-10 14:32:03.985801] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:39.414 [2024-12-10 14:32:04.149995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.150055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:39.414 [2024-12-10 14:32:04.150072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:39.414 [2024-12-10 14:32:04.150100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.150151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.150167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:39.414 [2024-12-10 14:32:04.150178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:39.414 [2024-12-10 14:32:04.150189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.150211] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:39.414 [2024-12-10 14:32:04.151095] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:39.414 [2024-12-10 14:32:04.151123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.151135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:39.414 [2024-12-10 14:32:04.151147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:26:39.414 [2024-12-10 14:32:04.151158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.153591] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:39.414 [2024-12-10 14:32:04.172783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.172822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:39.414 [2024-12-10 14:32:04.172837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.224 ms 00:26:39.414 [2024-12-10 14:32:04.172848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.172921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.172934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:39.414 [2024-12-10 14:32:04.172946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:39.414 [2024-12-10 14:32:04.172956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.184974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.185005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:39.414 [2024-12-10 14:32:04.185019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.966 ms 00:26:39.414 [2024-12-10 14:32:04.185033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.185119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.185133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:39.414 [2024-12-10 14:32:04.185144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:39.414 [2024-12-10 14:32:04.185155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.185209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.185221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:39.414 [2024-12-10 14:32:04.185241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:39.414 [2024-12-10 14:32:04.185250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.185280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:39.414 [2024-12-10 14:32:04.190914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.190947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:39.414 [2024-12-10 14:32:04.190964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.649 ms 00:26:39.414 [2024-12-10 14:32:04.190975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.191012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.191024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:39.414 [2024-12-10 14:32:04.191036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:39.414 [2024-12-10 14:32:04.191047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.191083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:39.414 [2024-12-10 14:32:04.191113] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:39.414 [2024-12-10 14:32:04.191152] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area:
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:39.414 [2024-12-10 14:32:04.191174] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:39.414 [2024-12-10 14:32:04.191265] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:39.414 [2024-12-10 14:32:04.191279] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:39.414 [2024-12-10 14:32:04.191293] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:39.414 [2024-12-10 14:32:04.191307] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:39.414 [2024-12-10 14:32:04.191319] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:39.414 [2024-12-10 14:32:04.191331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:39.414 [2024-12-10 14:32:04.191342] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:39.414 [2024-12-10 14:32:04.191356] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:39.414 [2024-12-10 14:32:04.191366] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:39.414 [2024-12-10 14:32:04.191378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.191388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:39.414 [2024-12-10 14:32:04.191399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:26:39.414 [2024-12-10 14:32:04.191409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.191479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.414 [2024-12-10 14:32:04.191490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:39.414 [2024-12-10 14:32:04.191501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:39.414 [2024-12-10 14:32:04.191511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.414 [2024-12-10 14:32:04.191609] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:39.414 [2024-12-10 14:32:04.191624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:39.414 [2024-12-10 14:32:04.191636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:39.414 [2024-12-10 14:32:04.191646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.414 [2024-12-10 14:32:04.191657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:39.414 [2024-12-10 14:32:04.191666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:39.414 [2024-12-10 14:32:04.191689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:39.414 [2024-12-10 14:32:04.191699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:39.414 [2024-12-10 14:32:04.191709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:39.414 [2024-12-10 14:32:04.191719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:39.414 [2024-12-10 14:32:04.191729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:26:39.415 [2024-12-10 14:32:04.191739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:39.415 [2024-12-10 14:32:04.191749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:39.415 [2024-12-10 14:32:04.191769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:39.415 [2024-12-10 14:32:04.191780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:39.415 [2024-12-10 14:32:04.191789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.415 [2024-12-10 14:32:04.191799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:39.415 [2024-12-10 14:32:04.191808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:39.415 [2024-12-10 14:32:04.191818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.415 [2024-12-10 14:32:04.191827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:39.415 [2024-12-10 14:32:04.191837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:39.415 [2024-12-10 14:32:04.191846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.415 [2024-12-10 14:32:04.191856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:39.415 [2024-12-10 14:32:04.191865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:39.415 [2024-12-10 14:32:04.191874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.415 [2024-12-10 14:32:04.191883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:39.415 [2024-12-10 14:32:04.191893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:39.415 [2024-12-10 14:32:04.191902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.415 [2024-12-10 14:32:04.191911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:39.415 [2024-12-10 14:32:04.191921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:39.415 [2024-12-10 14:32:04.191930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.415 [2024-12-10 14:32:04.191939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:39.415 [2024-12-10 14:32:04.191948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:39.415 [2024-12-10 14:32:04.191957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:39.415 [2024-12-10 14:32:04.191966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:39.415 [2024-12-10 14:32:04.191976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:39.415 [2024-12-10 14:32:04.191984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:39.415 [2024-12-10 14:32:04.191993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:39.415 [2024-12-10 14:32:04.192004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:39.415 [2024-12-10 14:32:04.192013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.415 [2024-12-10 14:32:04.192023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:39.415 [2024-12-10 14:32:04.192032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:39.415 [2024-12-10 14:32:04.192041] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.415 [2024-12-10 14:32:04.192050] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:39.415 [2024-12-10 14:32:04.192060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:39.415 [2024-12-10 14:32:04.192070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:39.415 [2024-12-10 14:32:04.192080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.415 [2024-12-10 14:32:04.192090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:39.415 [2024-12-10 14:32:04.192100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:39.415 [2024-12-10 14:32:04.192109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:39.415 [2024-12-10 14:32:04.192119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:39.415 [2024-12-10 14:32:04.192128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:39.415 [2024-12-10 14:32:04.192137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:39.415 [2024-12-10 14:32:04.192147] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:39.415 [2024-12-10 14:32:04.192160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.415 [2024-12-10 14:32:04.192175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:39.415 [2024-12-10 14:32:04.192186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:39.415 [2024-12-10 14:32:04.192196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:39.415 [2024-12-10 14:32:04.192207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:39.415 [2024-12-10 14:32:04.192218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:39.415 [2024-12-10 14:32:04.192230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:39.415 [2024-12-10 14:32:04.192240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:39.415 [2024-12-10 14:32:04.192251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:39.415 [2024-12-10 14:32:04.192260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:39.415 [2024-12-10 14:32:04.192271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:39.415 [2024-12-10 14:32:04.192281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:39.415 [2024-12-10 14:32:04.192292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:39.415 [2024-12-10 14:32:04.192302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:39.415 [2024-12-10 14:32:04.192312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:39.415 [2024-12-10 14:32:04.192322] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:39.415 [2024-12-10 14:32:04.192335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.415 [2024-12-10 14:32:04.192346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:39.415 [2024-12-10 14:32:04.192356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:39.415 [2024-12-10 14:32:04.192366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:39.415 [2024-12-10 14:32:04.192376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:39.415 [2024-12-10 14:32:04.192386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.415 [2024-12-10 14:32:04.192397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:39.415 [2024-12-10 14:32:04.192407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:26:39.415 [2024-12-10 14:32:04.192416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.415
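
The hex Region records above and the MiB figures in the ftl_layout dump that precedes them describe the same layout in two units: blk_offs and blk_sz count FTL blocks, and the MiB values follow from a 4 KiB block size (an assumption, but one this log supports: type 0x2 with blk_sz:0x5000 lines up with the 80.00 MiB l2p region, and type 0x9 with blk_sz:0x1900000 with the 102400.00 MiB data_btm region). A minimal sketch of the conversion:

    # Convert the blk_offs/blk_sz fields of the ftl_sb_v5 region dump to MiB,
    # assuming the 4 KiB FTL block size implied by the layout dump above.
    FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumption, see note above)

    def blocks_to_mib(blocks: int) -> float:
        return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

    print(blocks_to_mib(0x5000))     # 80.0     -> "Region l2p ... blocks: 80.00 MiB"
    print(blocks_to_mib(0x1900000))  # 102400.0 -> "Region data_btm ... blocks: 102400.00 MiB"

The same arithmetic covers the small regions too, e.g. blk_sz:0x20 is 32 blocks, i.e. the 0.12 MiB reported for the sb and nvc_md regions.
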
[2024-12-10 14:32:04.239740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.415 [2024-12-10 14:32:04.239777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:39.415 [2024-12-10 14:32:04.239792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.350 ms 00:26:39.415 [2024-12-10 14:32:04.239825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.415 [2024-12-10 14:32:04.239903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.415 [2024-12-10 14:32:04.239915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:39.415 [2024-12-10 14:32:04.239937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:39.415 [2024-12-10 14:32:04.239947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.316928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.316974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:39.675 [2024-12-10 14:32:04.316989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.046 ms 00:26:39.675 [2024-12-10 14:32:04.317001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.317045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.317057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:39.675 [2024-12-10 14:32:04.317081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:39.675 [2024-12-10 14:32:04.317091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.317963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.317986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:39.675 [2024-12-10 14:32:04.317998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:26:39.675 [2024-12-10 14:32:04.318009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.318145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.318159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:39.675 [2024-12-10 14:32:04.318177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:26:39.675 [2024-12-10 14:32:04.318187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.340084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.340124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:39.675 [2024-12-10 14:32:04.340138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.910 ms 00:26:39.675 [2024-12-10 14:32:04.340166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.359475] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:39.675 [2024-12-10 14:32:04.359516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:39.675 [2024-12-10 14:32:04.359533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.359544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:39.675 [2024-12-10 14:32:04.359555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.280 ms 00:26:39.675 [2024-12-10 14:32:04.359565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.387854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.387894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:39.675 [2024-12-10 14:32:04.387908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.289 ms 00:26:39.675 [2024-12-10 14:32:04.387919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.405254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.405290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:39.675 [2024-12-10 14:32:04.405304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.305 ms 00:26:39.675 [2024-12-10 14:32:04.405314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.422410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.422447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:39.675 [2024-12-10 14:32:04.422460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.080 ms 00:26:39.675 [2024-12-10
14:32:04.422469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.675 [2024-12-10 14:32:04.423179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.675 [2024-12-10 14:32:04.423204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:39.675 [2024-12-10 14:32:04.423219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:26:39.675 [2024-12-10 14:32:04.423229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.935 [2024-12-10 14:32:04.512497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.935 [2024-12-10 14:32:04.512564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:39.935 [2024-12-10 14:32:04.512589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.388 ms 00:26:39.935 [2024-12-10 14:32:04.512601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.935 [2024-12-10 14:32:04.522824] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:39.935 [2024-12-10 14:32:04.526144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.935 [2024-12-10 14:32:04.526176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:39.935 [2024-12-10 14:32:04.526190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.515 ms 00:26:39.935 [2024-12-10 14:32:04.526201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.935 [2024-12-10 14:32:04.526285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.935 [2024-12-10 14:32:04.526300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:39.935 [2024-12-10 14:32:04.526318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:39.935 [2024-12-10 14:32:04.526329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.935 [2024-12-10 14:32:04.526448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.935 [2024-12-10 14:32:04.526463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:39.935 [2024-12-10 14:32:04.526476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:39.935 [2024-12-10 14:32:04.526486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.935 [2024-12-10 14:32:04.526515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.935 [2024-12-10 14:32:04.526527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:39.935 [2024-12-10 14:32:04.526538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:39.935 [2024-12-10 14:32:04.526549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.935 [2024-12-10 14:32:04.526607] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:39.935 [2024-12-10 14:32:04.526621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.935 [2024-12-10 14:32:04.526631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:39.935 [2024-12-10 14:32:04.526642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:39.935 [2024-12-10 14:32:04.526652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.935 [2024-12-10 14:32:04.562496] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.935 [2024-12-10 14:32:04.562539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:39.935 [2024-12-10 14:32:04.562561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.880 ms 00:26:39.935 [2024-12-10 14:32:04.562572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.935 [2024-12-10 14:32:04.562655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.935 [2024-12-10 14:32:04.562679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:39.935 [2024-12-10 14:32:04.562692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:39.935 [2024-12-10 14:32:04.562703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.935 [2024-12-10 14:32:04.564203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 414.340 ms, result 0 00:26:40.869  [2024-12-10T14:32:06.640Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-10T14:32:07.576Z] Copying: 45/1024 [MB] (22 MBps) [2024-12-10T14:32:08.954Z] Copying: 67/1024 [MB] (22 MBps) [2024-12-10T14:32:09.891Z] Copying: 89/1024 [MB] (21 MBps) [2024-12-10T14:32:10.827Z] Copying: 111/1024 [MB] (21 MBps) [2024-12-10T14:32:11.764Z] Copying: 132/1024 [MB] (21 MBps) [2024-12-10T14:32:12.700Z] Copying: 154/1024 [MB] (21 MBps) [2024-12-10T14:32:13.636Z] Copying: 176/1024 [MB] (21 MBps) [2024-12-10T14:32:14.571Z] Copying: 198/1024 [MB] (22 MBps) [2024-12-10T14:32:15.948Z] Copying: 221/1024 [MB] (23 MBps) [2024-12-10T14:32:16.883Z] Copying: 243/1024 [MB] (21 MBps) [2024-12-10T14:32:17.819Z] Copying: 266/1024 [MB] (22 MBps) [2024-12-10T14:32:18.755Z] Copying: 289/1024 [MB] (23 MBps) [2024-12-10T14:32:19.692Z] Copying: 312/1024 [MB] (22 MBps) [2024-12-10T14:32:20.630Z] Copying: 334/1024 [MB] (22 MBps) [2024-12-10T14:32:21.567Z] Copying: 357/1024 [MB] (23 MBps) [2024-12-10T14:32:22.945Z] Copying: 380/1024 [MB] (23 MBps) [2024-12-10T14:32:23.922Z] Copying: 404/1024 [MB] (24 MBps) [2024-12-10T14:32:24.858Z] Copying: 428/1024 [MB] (23 MBps) [2024-12-10T14:32:25.794Z] Copying: 451/1024 [MB] (23 MBps) [2024-12-10T14:32:26.731Z] Copying: 474/1024 [MB] (22 MBps) [2024-12-10T14:32:27.667Z] Copying: 496/1024 [MB] (22 MBps) [2024-12-10T14:32:28.604Z] Copying: 520/1024 [MB] (23 MBps) [2024-12-10T14:32:29.541Z] Copying: 543/1024 [MB] (23 MBps) [2024-12-10T14:32:30.920Z] Copying: 567/1024 [MB] (23 MBps) [2024-12-10T14:32:31.858Z] Copying: 591/1024 [MB] (23 MBps) [2024-12-10T14:32:32.794Z] Copying: 614/1024 [MB] (23 MBps) [2024-12-10T14:32:33.731Z] Copying: 637/1024 [MB] (23 MBps) [2024-12-10T14:32:34.668Z] Copying: 660/1024 [MB] (22 MBps) [2024-12-10T14:32:35.602Z] Copying: 682/1024 [MB] (21 MBps) [2024-12-10T14:32:36.539Z] Copying: 705/1024 [MB] (22 MBps) [2024-12-10T14:32:37.916Z] Copying: 728/1024 [MB] (23 MBps) [2024-12-10T14:32:38.853Z] Copying: 751/1024 [MB] (23 MBps) [2024-12-10T14:32:39.790Z] Copying: 775/1024 [MB] (23 MBps) [2024-12-10T14:32:40.726Z] Copying: 799/1024 [MB] (24 MBps) [2024-12-10T14:32:41.663Z] Copying: 823/1024 [MB] (24 MBps) [2024-12-10T14:32:42.601Z] Copying: 847/1024 [MB] (23 MBps) [2024-12-10T14:32:43.538Z] Copying: 871/1024 [MB] (24 MBps) [2024-12-10T14:32:44.916Z] Copying: 895/1024 [MB] (23 MBps) [2024-12-10T14:32:45.852Z] Copying: 919/1024 [MB] (23 MBps) [2024-12-10T14:32:46.789Z] Copying: 943/1024 [MB] (24 MBps) [2024-12-10T14:32:47.725Z] Copying: 
968/1024 [MB] (24 MBps) [2024-12-10T14:32:48.675Z] Copying: 992/1024 [MB] (24 MBps) [2024-12-10T14:32:49.698Z] Copying: 1016/1024 [MB] (24 MBps) [2024-12-10T14:32:49.698Z] Copying: 1048564/1048576 [kB] (7908 kBps) [2024-12-10T14:32:49.698Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-10 14:32:49.523288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.864 [2024-12-10 14:32:49.523378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:24.864 [2024-12-10 14:32:49.523409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:24.864 [2024-12-10 14:32:49.523421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.864 [2024-12-10 14:32:49.525048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:24.864 [2024-12-10 14:32:49.530283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.864 [2024-12-10 14:32:49.530324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:24.864 [2024-12-10 14:32:49.530355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.206 ms 00:27:24.864 [2024-12-10 14:32:49.530380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.864 [2024-12-10 14:32:49.542046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.864 [2024-12-10 14:32:49.542084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:24.864 [2024-12-10 14:32:49.542116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.351 ms 00:27:24.864 [2024-12-10 14:32:49.542136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.864 [2024-12-10 14:32:49.566101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.864 [2024-12-10 14:32:49.566162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:24.864 [2024-12-10 14:32:49.566178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.984 ms 00:27:24.864 [2024-12-10 14:32:49.566203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.864 [2024-12-10 14:32:49.571109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.864 [2024-12-10 14:32:49.571141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:24.864 [2024-12-10 14:32:49.571170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:27:24.864 [2024-12-10 14:32:49.571189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.864 [2024-12-10 14:32:49.607000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.864 [2024-12-10 14:32:49.607036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:24.864 [2024-12-10 14:32:49.607067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.797 ms 00:27:24.864 [2024-12-10 14:32:49.607078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.864 [2024-12-10 14:32:49.627841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.864 [2024-12-10 14:32:49.627877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:24.864 [2024-12-10 14:32:49.627908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.759 ms 00:27:24.864 [2024-12-10 14:32:49.627919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:25.125 [2024-12-10 14:32:49.747285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.125 [2024-12-10 14:32:49.747340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.125 [2024-12-10 14:32:49.747356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.517 ms 00:27:25.125 [2024-12-10 14:32:49.747368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.125 [2024-12-10 14:32:49.781591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.125 [2024-12-10 14:32:49.781627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:25.125 [2024-12-10 14:32:49.781657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.260 ms 00:27:25.125 [2024-12-10 14:32:49.781668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.125 [2024-12-10 14:32:49.815470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.125 [2024-12-10 14:32:49.815505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:25.125 [2024-12-10 14:32:49.815518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.809 ms 00:27:25.125 [2024-12-10 14:32:49.815528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.125 [2024-12-10 14:32:49.849439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.125 [2024-12-10 14:32:49.849494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:25.125 [2024-12-10 14:32:49.849508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.913 ms 00:27:25.125 [2024-12-10 14:32:49.849518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.125 [2024-12-10 14:32:49.882630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.125 [2024-12-10 14:32:49.882665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.125 [2024-12-10 14:32:49.882685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.074 ms 00:27:25.125 [2024-12-10 14:32:49.882695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.125 [2024-12-10 14:32:49.882747] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.125 [2024-12-10 14:32:49.882764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102144 / 261120 wr_cnt: 1 state: open 00:27:25.125 [2024-12-10 14:32:49.882778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:27:25.125 [2024-12-10 14:32:49.882857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.882993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.125 [2024-12-10 14:32:49.883328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883682] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.126 [2024-12-10 14:32:49.883899] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:25.126 [2024-12-10 14:32:49.883909] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8b0d425-e333-4ad6-90c7-15392a67b1ea 00:27:25.126 [2024-12-10 14:32:49.883921] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102144 00:27:25.126 [2024-12-10 14:32:49.883931] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103104 00:27:25.126 [2024-12-10 14:32:49.883942] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102144 00:27:25.126 [2024-12-10 14:32:49.883953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:27:25.126 [2024-12-10 14:32:49.883980] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.126 [2024-12-10 14:32:49.883991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.126 [2024-12-10 14:32:49.884002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 
0 00:27:25.126 [2024-12-10 14:32:49.884011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.126 [2024-12-10 14:32:49.884021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.126 [2024-12-10 14:32:49.884031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.126 [2024-12-10 14:32:49.884043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.126 [2024-12-10 14:32:49.884055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:27:25.126 [2024-12-10 14:32:49.884066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.126 [2024-12-10 14:32:49.904465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.126 [2024-12-10 14:32:49.904497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:25.126 [2024-12-10 14:32:49.904517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.380 ms 00:27:25.126 [2024-12-10 14:32:49.904528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.126 [2024-12-10 14:32:49.905199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.126 [2024-12-10 14:32:49.905218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.126 [2024-12-10 14:32:49.905230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:27:25.126 [2024-12-10 14:32:49.905240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:49.957719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:49.957758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.386 [2024-12-10 14:32:49.957772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:49.957783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:49.957844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:49.957855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.386 [2024-12-10 14:32:49.957866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:49.957878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:49.957942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:49.957961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.386 [2024-12-10 14:32:49.957973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:49.957983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:49.958002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:49.958013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.386 [2024-12-10 14:32:49.958024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:49.958034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:50.091303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:50.091383] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.386 [2024-12-10 14:32:50.091401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:50.091413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:50.194842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:50.194919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.386 [2024-12-10 14:32:50.194936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:50.194949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:50.195065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:50.195078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.386 [2024-12-10 14:32:50.195090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:50.195108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:50.195164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:50.195176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.386 [2024-12-10 14:32:50.195187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:50.195197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:50.195563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:50.195587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.386 [2024-12-10 14:32:50.195599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:50.195615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:50.195660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:50.195691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.386 [2024-12-10 14:32:50.195703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:50.195714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:50.195763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:50.195775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.386 [2024-12-10 14:32:50.195786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:50.195797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:50.195854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.386 [2024-12-10 14:32:50.195866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.386 [2024-12-10 14:32:50.195878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.386 [2024-12-10 14:32:50.195888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.386 [2024-12-10 14:32:50.196043] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL shutdown', duration = 676.240 ms, result 0 00:27:27.291 00:27:27.291 00:27:27.291 14:32:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:27.291 [2024-12-10 14:32:51.907518] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:27:27.291 [2024-12-10 14:32:51.907645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82074 ] 00:27:27.291 [2024-12-10 14:32:52.088898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.550 [2024-12-10 14:32:52.223335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.809 [2024-12-10 14:32:52.629335] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:27.809 [2024-12-10 14:32:52.629413] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:28.069 [2024-12-10 14:32:52.794126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.069 [2024-12-10 14:32:52.794200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:28.069 [2024-12-10 14:32:52.794219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:28.069 [2024-12-10 14:32:52.794230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.069 [2024-12-10 14:32:52.794284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.069 [2024-12-10 14:32:52.794302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.069 [2024-12-10 14:32:52.794314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:28.069 [2024-12-10 14:32:52.794325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.069 [2024-12-10 14:32:52.794347] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:28.069 [2024-12-10 14:32:52.795318] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:28.069 [2024-12-10 14:32:52.795348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.069 [2024-12-10 14:32:52.795359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.069 [2024-12-10 14:32:52.795371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:27:28.069 [2024-12-10 14:32:52.795381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.069 [2024-12-10 14:32:52.797800] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:28.069 [2024-12-10 14:32:52.817038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.069 [2024-12-10 14:32:52.817074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:28.069 [2024-12-10 14:32:52.817090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.269 ms 00:27:28.069 [2024-12-10 14:32:52.817100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.069 [2024-12-10 14:32:52.817187] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.069 [2024-12-10 14:32:52.817200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:28.069 [2024-12-10 14:32:52.817213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:28.070 [2024-12-10 14:32:52.817223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.070 [2024-12-10 14:32:52.829326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.070 [2024-12-10 14:32:52.829354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.070 [2024-12-10 14:32:52.829367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.051 ms 00:27:28.070 [2024-12-10 14:32:52.829382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.070 [2024-12-10 14:32:52.829487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.070 [2024-12-10 14:32:52.829502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.070 [2024-12-10 14:32:52.829513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:28.070 [2024-12-10 14:32:52.829523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.070 [2024-12-10 14:32:52.829580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.070 [2024-12-10 14:32:52.829593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:28.070 [2024-12-10 14:32:52.829604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:28.070 [2024-12-10 14:32:52.829614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.070 [2024-12-10 14:32:52.829644] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:28.070 [2024-12-10 14:32:52.835381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.070 [2024-12-10 14:32:52.835412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.070 [2024-12-10 14:32:52.835447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.752 ms 00:27:28.070 [2024-12-10 14:32:52.835457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.070 [2024-12-10 14:32:52.835494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.070 [2024-12-10 14:32:52.835506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:28.070 [2024-12-10 14:32:52.835517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:28.070 [2024-12-10 14:32:52.835527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.070 [2024-12-10 14:32:52.835563] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:28.070 [2024-12-10 14:32:52.835592] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:28.070 [2024-12-10 14:32:52.835630] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:28.070 [2024-12-10 14:32:52.835652] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:28.070 [2024-12-10 14:32:52.835756] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:28.070 
[2024-12-10 14:32:52.835787] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:28.070 [2024-12-10 14:32:52.835801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:28.070 [2024-12-10 14:32:52.835815] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:28.070 [2024-12-10 14:32:52.835827] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:28.070 [2024-12-10 14:32:52.835840] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:28.070 [2024-12-10 14:32:52.835851] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:28.070 [2024-12-10 14:32:52.835866] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:28.070 [2024-12-10 14:32:52.835877] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:28.070 [2024-12-10 14:32:52.835887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.070 [2024-12-10 14:32:52.835898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:28.070 [2024-12-10 14:32:52.835909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:27:28.070 [2024-12-10 14:32:52.835920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.070 [2024-12-10 14:32:52.835993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.070 [2024-12-10 14:32:52.836004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:28.070 [2024-12-10 14:32:52.836015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:28.070 [2024-12-10 14:32:52.836025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.070 [2024-12-10 14:32:52.836125] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:28.070 [2024-12-10 14:32:52.836140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:28.070 [2024-12-10 14:32:52.836152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.070 [2024-12-10 14:32:52.836162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:28.070 [2024-12-10 14:32:52.836183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:28.070 [2024-12-10 14:32:52.836203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:28.070 [2024-12-10 14:32:52.836212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.070 [2024-12-10 14:32:52.836230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:28.070 [2024-12-10 14:32:52.836239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:28.070 [2024-12-10 14:32:52.836251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.070 [2024-12-10 14:32:52.836272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:28.070 [2024-12-10 
14:32:52.836282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:28.070 [2024-12-10 14:32:52.836291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:28.070 [2024-12-10 14:32:52.836310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:28.070 [2024-12-10 14:32:52.836319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:28.070 [2024-12-10 14:32:52.836338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.070 [2024-12-10 14:32:52.836358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:28.070 [2024-12-10 14:32:52.836367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.070 [2024-12-10 14:32:52.836387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:28.070 [2024-12-10 14:32:52.836397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.070 [2024-12-10 14:32:52.836416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:28.070 [2024-12-10 14:32:52.836425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.070 [2024-12-10 14:32:52.836443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:28.070 [2024-12-10 14:32:52.836452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.070 [2024-12-10 14:32:52.836470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:28.070 [2024-12-10 14:32:52.836479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:28.070 [2024-12-10 14:32:52.836488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.070 [2024-12-10 14:32:52.836497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:28.070 [2024-12-10 14:32:52.836506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:28.070 [2024-12-10 14:32:52.836516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:28.070 [2024-12-10 14:32:52.836535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:28.070 [2024-12-10 14:32:52.836544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836553] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:28.070 [2024-12-10 14:32:52.836564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:28.070 [2024-12-10 14:32:52.836578] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:27:28.070 [2024-12-10 14:32:52.836587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.070 [2024-12-10 14:32:52.836597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:28.070 [2024-12-10 14:32:52.836608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:28.070 [2024-12-10 14:32:52.836617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:28.070 [2024-12-10 14:32:52.836627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:28.070 [2024-12-10 14:32:52.836636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:28.070 [2024-12-10 14:32:52.836645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:28.070 [2024-12-10 14:32:52.836657] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:28.070 [2024-12-10 14:32:52.836670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.070 [2024-12-10 14:32:52.836695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:28.070 [2024-12-10 14:32:52.836706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:28.070 [2024-12-10 14:32:52.836717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:28.070 [2024-12-10 14:32:52.836728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:28.070 [2024-12-10 14:32:52.836738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:28.070 [2024-12-10 14:32:52.836749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:28.070 [2024-12-10 14:32:52.836759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:28.071 [2024-12-10 14:32:52.836770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:28.071 [2024-12-10 14:32:52.836781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:28.071 [2024-12-10 14:32:52.836791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:28.071 [2024-12-10 14:32:52.836802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:28.071 [2024-12-10 14:32:52.836813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:28.071 [2024-12-10 14:32:52.836823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:28.071 [2024-12-10 14:32:52.836834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x7220 blk_sz:0x13c0e0 00:27:28.071 [2024-12-10 14:32:52.836844] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:28.071 [2024-12-10 14:32:52.836856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.071 [2024-12-10 14:32:52.836866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:28.071 [2024-12-10 14:32:52.836877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:28.071 [2024-12-10 14:32:52.836888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:28.071 [2024-12-10 14:32:52.836899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:28.071 [2024-12-10 14:32:52.836910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.071 [2024-12-10 14:32:52.836921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:28.071 [2024-12-10 14:32:52.836932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:27:28.071 [2024-12-10 14:32:52.836942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.071 [2024-12-10 14:32:52.883128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.071 [2024-12-10 14:32:52.883165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:28.071 [2024-12-10 14:32:52.883195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.210 ms 00:27:28.071 [2024-12-10 14:32:52.883212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.071 [2024-12-10 14:32:52.883292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.071 [2024-12-10 14:32:52.883304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:28.071 [2024-12-10 14:32:52.883316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:28.071 [2024-12-10 14:32:52.883327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.330 [2024-12-10 14:32:52.961385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.330 [2024-12-10 14:32:52.961425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:28.330 [2024-12-10 14:32:52.961462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.121 ms 00:27:28.330 [2024-12-10 14:32:52.961473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.330 [2024-12-10 14:32:52.961522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.330 [2024-12-10 14:32:52.961535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:28.330 [2024-12-10 14:32:52.961552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:28.330 [2024-12-10 14:32:52.961563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.330 [2024-12-10 14:32:52.962399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.330 [2024-12-10 14:32:52.962417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
00:27:28.330 [2024-12-10 14:32:52.962429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:27:28.330 [2024-12-10 14:32:52.962439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.330 [2024-12-10 14:32:52.962577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.330 [2024-12-10 14:32:52.962591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:28.330 [2024-12-10 14:32:52.962611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:27:28.330 [2024-12-10 14:32:52.962621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.330 [2024-12-10 14:32:52.985372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.330 [2024-12-10 14:32:52.985410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:28.330 [2024-12-10 14:32:52.985441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.765 ms 00:27:28.330 [2024-12-10 14:32:52.985459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.330 [2024-12-10 14:32:53.004441] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:28.330 [2024-12-10 14:32:53.004480] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:28.330 [2024-12-10 14:32:53.004496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.330 [2024-12-10 14:32:53.004508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:28.330 [2024-12-10 14:32:53.004520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.955 ms 00:27:28.330 [2024-12-10 14:32:53.004531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.330 [2024-12-10 14:32:53.033329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.330 [2024-12-10 14:32:53.033367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:28.330 [2024-12-10 14:32:53.033381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.798 ms 00:27:28.330 [2024-12-10 14:32:53.033392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.330 [2024-12-10 14:32:53.050862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.330 [2024-12-10 14:32:53.050908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:28.330 [2024-12-10 14:32:53.050921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.431 ms 00:27:28.330 [2024-12-10 14:32:53.050931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.330 [2024-12-10 14:32:53.067878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.330 [2024-12-10 14:32:53.067910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:28.331 [2024-12-10 14:32:53.067923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.920 ms 00:27:28.331 [2024-12-10 14:32:53.067932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.331 [2024-12-10 14:32:53.068712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.331 [2024-12-10 14:32:53.068736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:28.331 [2024-12-10 14:32:53.068754] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:27:28.331 [2024-12-10 14:32:53.068765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.590 [2024-12-10 14:32:53.162880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.590 [2024-12-10 14:32:53.162940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:28.590 [2024-12-10 14:32:53.162966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.243 ms 00:27:28.590 [2024-12-10 14:32:53.162978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.590 [2024-12-10 14:32:53.173491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:28.590 [2024-12-10 14:32:53.177531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.590 [2024-12-10 14:32:53.177565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:28.590 [2024-12-10 14:32:53.177580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.520 ms 00:27:28.590 [2024-12-10 14:32:53.177591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.590 [2024-12-10 14:32:53.177702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.590 [2024-12-10 14:32:53.177717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:28.590 [2024-12-10 14:32:53.177734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:28.590 [2024-12-10 14:32:53.177745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.590 [2024-12-10 14:32:53.179880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.590 [2024-12-10 14:32:53.179918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:28.590 [2024-12-10 14:32:53.179931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:27:28.590 [2024-12-10 14:32:53.179942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.590 [2024-12-10 14:32:53.179983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.590 [2024-12-10 14:32:53.179995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:28.590 [2024-12-10 14:32:53.180006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:28.590 [2024-12-10 14:32:53.180017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.590 [2024-12-10 14:32:53.180066] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:28.590 [2024-12-10 14:32:53.180079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.590 [2024-12-10 14:32:53.180090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:28.590 [2024-12-10 14:32:53.180102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:28.590 [2024-12-10 14:32:53.180129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.590 [2024-12-10 14:32:53.215579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.590 [2024-12-10 14:32:53.215621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:28.590 [2024-12-10 14:32:53.215658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.487 ms 00:27:28.590 [2024-12-10 14:32:53.215670] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:28.590 [2024-12-10 14:32:53.215763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.590 [2024-12-10 14:32:53.215777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:28.590 [2024-12-10 14:32:53.215788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:28.590 [2024-12-10 14:32:53.215799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.590 [2024-12-10 14:32:53.217252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 423.246 ms, result 0 00:27:29.968  [2024-12-10T14:32:55.738Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-10T14:32:56.676Z] Copying: 42/1024 [MB] (23 MBps) [2024-12-10T14:32:57.615Z] Copying: 65/1024 [MB] (22 MBps) [2024-12-10T14:32:58.553Z] Copying: 87/1024 [MB] (22 MBps) [2024-12-10T14:32:59.490Z] Copying: 110/1024 [MB] (22 MBps) [2024-12-10T14:33:00.433Z] Copying: 133/1024 [MB] (22 MBps) [2024-12-10T14:33:01.811Z] Copying: 156/1024 [MB] (23 MBps) [2024-12-10T14:33:02.750Z] Copying: 179/1024 [MB] (23 MBps) [2024-12-10T14:33:03.688Z] Copying: 202/1024 [MB] (23 MBps) [2024-12-10T14:33:04.626Z] Copying: 225/1024 [MB] (23 MBps) [2024-12-10T14:33:05.564Z] Copying: 249/1024 [MB] (23 MBps) [2024-12-10T14:33:06.502Z] Copying: 272/1024 [MB] (23 MBps) [2024-12-10T14:33:07.440Z] Copying: 295/1024 [MB] (23 MBps) [2024-12-10T14:33:08.826Z] Copying: 319/1024 [MB] (23 MBps) [2024-12-10T14:33:09.764Z] Copying: 342/1024 [MB] (23 MBps) [2024-12-10T14:33:10.702Z] Copying: 366/1024 [MB] (23 MBps) [2024-12-10T14:33:11.638Z] Copying: 390/1024 [MB] (24 MBps) [2024-12-10T14:33:12.575Z] Copying: 415/1024 [MB] (25 MBps) [2024-12-10T14:33:13.510Z] Copying: 440/1024 [MB] (25 MBps) [2024-12-10T14:33:14.461Z] Copying: 466/1024 [MB] (25 MBps) [2024-12-10T14:33:15.410Z] Copying: 490/1024 [MB] (24 MBps) [2024-12-10T14:33:16.789Z] Copying: 515/1024 [MB] (24 MBps) [2024-12-10T14:33:17.726Z] Copying: 539/1024 [MB] (24 MBps) [2024-12-10T14:33:18.663Z] Copying: 564/1024 [MB] (24 MBps) [2024-12-10T14:33:19.600Z] Copying: 589/1024 [MB] (24 MBps) [2024-12-10T14:33:20.538Z] Copying: 613/1024 [MB] (24 MBps) [2024-12-10T14:33:21.472Z] Copying: 637/1024 [MB] (24 MBps) [2024-12-10T14:33:22.409Z] Copying: 663/1024 [MB] (25 MBps) [2024-12-10T14:33:23.786Z] Copying: 687/1024 [MB] (24 MBps) [2024-12-10T14:33:24.723Z] Copying: 711/1024 [MB] (24 MBps) [2024-12-10T14:33:25.659Z] Copying: 736/1024 [MB] (25 MBps) [2024-12-10T14:33:26.597Z] Copying: 761/1024 [MB] (25 MBps) [2024-12-10T14:33:27.534Z] Copying: 787/1024 [MB] (25 MBps) [2024-12-10T14:33:28.471Z] Copying: 811/1024 [MB] (24 MBps) [2024-12-10T14:33:29.408Z] Copying: 837/1024 [MB] (25 MBps) [2024-12-10T14:33:30.785Z] Copying: 862/1024 [MB] (25 MBps) [2024-12-10T14:33:31.721Z] Copying: 887/1024 [MB] (25 MBps) [2024-12-10T14:33:32.657Z] Copying: 911/1024 [MB] (24 MBps) [2024-12-10T14:33:33.592Z] Copying: 936/1024 [MB] (24 MBps) [2024-12-10T14:33:34.529Z] Copying: 961/1024 [MB] (24 MBps) [2024-12-10T14:33:35.466Z] Copying: 985/1024 [MB] (24 MBps) [2024-12-10T14:33:36.034Z] Copying: 1010/1024 [MB] (24 MBps) [2024-12-10T14:33:36.603Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-10 14:33:36.391571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.769 [2024-12-10 14:33:36.391686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:11.769 [2024-12-10 14:33:36.391740] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:11.769 [2024-12-10 14:33:36.391759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.769 [2024-12-10 14:33:36.391800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:11.769 [2024-12-10 14:33:36.397763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.769 [2024-12-10 14:33:36.397798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:11.769 [2024-12-10 14:33:36.397813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.942 ms 00:28:11.769 [2024-12-10 14:33:36.397825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.769 [2024-12-10 14:33:36.398083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.769 [2024-12-10 14:33:36.398098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:11.769 [2024-12-10 14:33:36.398111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:28:11.769 [2024-12-10 14:33:36.398132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.769 [2024-12-10 14:33:36.403974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.769 [2024-12-10 14:33:36.404121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:11.769 [2024-12-10 14:33:36.404196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.829 ms 00:28:11.769 [2024-12-10 14:33:36.404232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.769 [2024-12-10 14:33:36.409144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.769 [2024-12-10 14:33:36.409267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:11.770 [2024-12-10 14:33:36.409412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.859 ms 00:28:11.770 [2024-12-10 14:33:36.409467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.770 [2024-12-10 14:33:36.445753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.770 [2024-12-10 14:33:36.445914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:11.770 [2024-12-10 14:33:36.446049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.253 ms 00:28:11.770 [2024-12-10 14:33:36.446086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.770 [2024-12-10 14:33:36.466228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.770 [2024-12-10 14:33:36.466352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:11.770 [2024-12-10 14:33:36.466550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.117 ms 00:28:11.770 [2024-12-10 14:33:36.466588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.030 [2024-12-10 14:33:36.620722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.030 [2024-12-10 14:33:36.620857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:12.030 [2024-12-10 14:33:36.620935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 154.322 ms 00:28:12.030 [2024-12-10 14:33:36.620973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.030 [2024-12-10 14:33:36.656077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:28:12.030 [2024-12-10 14:33:36.656201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:12.030 [2024-12-10 14:33:36.656286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.118 ms 00:28:12.030 [2024-12-10 14:33:36.656320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.030 [2024-12-10 14:33:36.690528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.030 [2024-12-10 14:33:36.690652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:12.030 [2024-12-10 14:33:36.690794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.149 ms 00:28:12.030 [2024-12-10 14:33:36.690833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.030 [2024-12-10 14:33:36.724539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.030 [2024-12-10 14:33:36.724687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:12.030 [2024-12-10 14:33:36.724829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.705 ms 00:28:12.030 [2024-12-10 14:33:36.724867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.031 [2024-12-10 14:33:36.759498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.031 [2024-12-10 14:33:36.759650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:12.031 [2024-12-10 14:33:36.759747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.591 ms 00:28:12.031 [2024-12-10 14:33:36.759784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.031 [2024-12-10 14:33:36.759843] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:12.031 [2024-12-10 14:33:36.759886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:28:12.031
[Band 2 through Band 100: "0 / 261120 wr_cnt: 0 state: free" on every band; 99 identical ftl_dev_dump_bands records condensed]
00:28:12.032 [2024-12-10 14:33:36.761987] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:12.032 [2024-12-10 14:33:36.761998] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8b0d425-e333-4ad6-90c7-15392a67b1ea 00:28:12.032 [2024-12-10 14:33:36.762009] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:12.032 [2024-12-10 14:33:36.762019] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 29888 00:28:12.032 [2024-12-10 14:33:36.762029] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 28928 00:28:12.032 [2024-12-10 14:33:36.762041] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0332 00:28:12.032 [2024-12-10 14:33:36.762058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:12.032 [2024-12-10 14:33:36.762081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:12.032 [2024-12-10 14:33:36.762093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:12.032 [2024-12-10 14:33:36.762102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:12.032 [2024-12-10 14:33:36.762112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:12.032 [2024-12-10 14:33:36.762123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.032 [2024-12-10 14:33:36.762134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:12.032 [2024-12-10 14:33:36.762146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:28:12.032
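[Note on the stats block above: the reported WAF is consistent with total writes divided by user writes, 29888 / 28928 ≈ 1.0332, i.e. roughly 3.3% of the blocks written in this run were FTL-internal traffic on top of the user I/O.]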
[2024-12-10 14:33:36.762156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.032 [2024-12-10 14:33:36.782592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.032 [2024-12-10 14:33:36.782722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:12.032 [2024-12-10 14:33:36.782765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.429 ms 00:28:12.032 [2024-12-10 14:33:36.782776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.032 [2024-12-10 14:33:36.783344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.032 [2024-12-10 14:33:36.783360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:12.032 [2024-12-10 14:33:36.783372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:28:12.032 [2024-12-10 14:33:36.783382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.032 [2024-12-10 14:33:36.836450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.032 [2024-12-10 14:33:36.836489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:12.032 [2024-12-10 14:33:36.836502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.032 [2024-12-10 14:33:36.836528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.032 [2024-12-10 14:33:36.836589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.032 [2024-12-10 14:33:36.836600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:12.032 [2024-12-10 14:33:36.836611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.032 [2024-12-10 14:33:36.836621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.032 [2024-12-10 14:33:36.836720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.032 [2024-12-10 14:33:36.836735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:12.032 [2024-12-10 14:33:36.836752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.032 [2024-12-10 14:33:36.836762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.032 [2024-12-10 14:33:36.836779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.032 [2024-12-10 14:33:36.836800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:12.032 [2024-12-10 14:33:36.836810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.032 [2024-12-10 14:33:36.836840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.292 [2024-12-10 14:33:36.965297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.292 [2024-12-10 14:33:36.965557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:12.292 [2024-12-10 14:33:36.965582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.292 [2024-12-10 14:33:36.965594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.292 [2024-12-10 14:33:37.069731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.292 [2024-12-10 14:33:37.069782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:12.292 [2024-12-10 14:33:37.069798] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.292 [2024-12-10 14:33:37.069809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.292 [2024-12-10 14:33:37.069923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.292 [2024-12-10 14:33:37.069936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:12.292 [2024-12-10 14:33:37.069948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.292 [2024-12-10 14:33:37.069963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.292 [2024-12-10 14:33:37.070011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.292 [2024-12-10 14:33:37.070023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:12.292 [2024-12-10 14:33:37.070034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.292 [2024-12-10 14:33:37.070044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.292 [2024-12-10 14:33:37.070197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.292 [2024-12-10 14:33:37.070211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:12.292 [2024-12-10 14:33:37.070222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.292 [2024-12-10 14:33:37.070233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.292 [2024-12-10 14:33:37.070276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.292 [2024-12-10 14:33:37.070289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:12.292 [2024-12-10 14:33:37.070300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.292 [2024-12-10 14:33:37.070311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.292 [2024-12-10 14:33:37.070358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.292 [2024-12-10 14:33:37.070370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:12.292 [2024-12-10 14:33:37.070381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.292 [2024-12-10 14:33:37.070392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.292 [2024-12-10 14:33:37.070446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.292 [2024-12-10 14:33:37.070458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:12.292 [2024-12-10 14:33:37.070468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.292 [2024-12-10 14:33:37.070479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.292 [2024-12-10 14:33:37.070625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 680.121 ms, result 0 00:28:13.671 00:28:13.671 00:28:13.671 14:33:38 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:15.621 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:15.621 14:33:39 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:15.621 14:33:39 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:28:15.621 14:33:39 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80355 00:28:15.621 14:33:40 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80355 ']' 00:28:15.621 14:33:40 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 80355 00:28:15.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80355) - No such process 00:28:15.621 Process with pid 80355 is not found 00:28:15.621 14:33:40 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 80355 is not found' 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:28:15.621 Remove shared memory files 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:15.621 14:33:40 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:28:15.621 ************************************ 00:28:15.621 END TEST ftl_restore 00:28:15.621 ************************************ 00:28:15.621 00:28:15.621 real 3m35.339s 00:28:15.621 user 3m21.108s 00:28:15.621 sys 0m15.501s 00:28:15.621 14:33:40 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.621 14:33:40 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 14:33:40 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:15.621 14:33:40 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:15.621 14:33:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.621 14:33:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:15.621 ************************************ 00:28:15.621 START TEST ftl_dirty_shutdown 00:28:15.621 ************************************ 00:28:15.621 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:15.622 * Looking for test storage... 
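[The ftl_restore test that ends above has a single pass criterion: a checksum recorded while the data was first written through ftl0 must still verify after the device is torn down and restored. A minimal sketch of that round-trip, with paths as in the log (simplified; the real flow lives in test/ftl/restore.sh, and the checksum creation is not shown in this excerpt):

    testdir=/home/vagrant/spdk_repo/spdk/test/ftl
    md5sum $testdir/testfile > $testdir/testfile.md5   # taken earlier in the script, before the FTL shutdown
    # ... 'FTL shutdown' trace above, then ftl0 is re-created from the same base and cache bdevs ...
    md5sum -c $testdir/testfile.md5                    # prints 'testfile: OK' on a byte-exact restore, as seen above

restore_kill then removes the testfile, its .md5, and ftl.json before killing the target process.]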
00:28:15.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:15.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.622 --rc genhtml_branch_coverage=1 00:28:15.622 --rc genhtml_function_coverage=1 00:28:15.622 --rc genhtml_legend=1 00:28:15.622 --rc geninfo_all_blocks=1 00:28:15.622 --rc geninfo_unexecuted_blocks=1 00:28:15.622 00:28:15.622 ' 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:15.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.622 --rc genhtml_branch_coverage=1 00:28:15.622 --rc genhtml_function_coverage=1 00:28:15.622 --rc genhtml_legend=1 00:28:15.622 --rc geninfo_all_blocks=1 00:28:15.622 --rc geninfo_unexecuted_blocks=1 00:28:15.622 00:28:15.622 ' 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:15.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.622 --rc genhtml_branch_coverage=1 00:28:15.622 --rc genhtml_function_coverage=1 00:28:15.622 --rc genhtml_legend=1 00:28:15.622 --rc geninfo_all_blocks=1 00:28:15.622 --rc geninfo_unexecuted_blocks=1 00:28:15.622 00:28:15.622 ' 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:15.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.622 --rc genhtml_branch_coverage=1 00:28:15.622 --rc genhtml_function_coverage=1 00:28:15.622 --rc genhtml_legend=1 00:28:15.622 --rc geninfo_all_blocks=1 00:28:15.622 --rc geninfo_unexecuted_blocks=1 00:28:15.622 00:28:15.622 ' 00:28:15.622 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:28:15.905 14:33:40 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82627 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82627 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82627 ']' 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.905 14:33:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.905 [2024-12-10 14:33:40.597251] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
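[The launch idiom traced just above (dirty_shutdown.sh@42-47) reduces to the sketch below. The backgrounding and '$!' capture are an assumption here, since the xtrace only shows the resulting pid (82627); everything else is copied from the log:

    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!                 # 82627 in this run
    waitforlisten $svcpid     # autotest_common.sh helper; returns once /var/tmp/spdk.sock accepts RPCs

Every rpc.py call that follows assumes this wait has completed, since they all talk to that UNIX domain socket.]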
00:28:15.905 [2024-12-10 14:33:40.597503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82627 ] 00:28:16.164 [2024-12-10 14:33:40.776972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.164 [2024-12-10 14:33:40.908797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.101 14:33:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.101 14:33:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:17.101 14:33:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:17.101 14:33:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:28:17.101 14:33:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:17.101 14:33:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:28:17.101 14:33:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:17.101 14:33:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:17.360 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:17.360 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:17.360 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:17.360 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:17.360 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:17.360 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:17.360 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:17.619 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:17.619 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:17.620 { 00:28:17.620 "name": "nvme0n1", 00:28:17.620 "aliases": [ 00:28:17.620 "15a01015-53c0-4e34-abd1-7a5b332f3120" 00:28:17.620 ], 00:28:17.620 "product_name": "NVMe disk", 00:28:17.620 "block_size": 4096, 00:28:17.620 "num_blocks": 1310720, 00:28:17.620 "uuid": "15a01015-53c0-4e34-abd1-7a5b332f3120", 00:28:17.620 "numa_id": -1, 00:28:17.620 "assigned_rate_limits": { 00:28:17.620 "rw_ios_per_sec": 0, 00:28:17.620 "rw_mbytes_per_sec": 0, 00:28:17.620 "r_mbytes_per_sec": 0, 00:28:17.620 "w_mbytes_per_sec": 0 00:28:17.620 }, 00:28:17.620 "claimed": true, 00:28:17.620 "claim_type": "read_many_write_one", 00:28:17.620 "zoned": false, 00:28:17.620 "supported_io_types": { 00:28:17.620 "read": true, 00:28:17.620 "write": true, 00:28:17.620 "unmap": true, 00:28:17.620 "flush": true, 00:28:17.620 "reset": true, 00:28:17.620 "nvme_admin": true, 00:28:17.620 "nvme_io": true, 00:28:17.620 "nvme_io_md": false, 00:28:17.620 "write_zeroes": true, 00:28:17.620 "zcopy": false, 00:28:17.620 "get_zone_info": false, 00:28:17.620 "zone_management": false, 00:28:17.620 "zone_append": false, 00:28:17.620 "compare": true, 00:28:17.620 "compare_and_write": false, 00:28:17.620 "abort": true, 00:28:17.620 "seek_hole": false, 00:28:17.620 "seek_data": false, 00:28:17.620 
"copy": true, 00:28:17.620 "nvme_iov_md": false 00:28:17.620 }, 00:28:17.620 "driver_specific": { 00:28:17.620 "nvme": [ 00:28:17.620 { 00:28:17.620 "pci_address": "0000:00:11.0", 00:28:17.620 "trid": { 00:28:17.620 "trtype": "PCIe", 00:28:17.620 "traddr": "0000:00:11.0" 00:28:17.620 }, 00:28:17.620 "ctrlr_data": { 00:28:17.620 "cntlid": 0, 00:28:17.620 "vendor_id": "0x1b36", 00:28:17.620 "model_number": "QEMU NVMe Ctrl", 00:28:17.620 "serial_number": "12341", 00:28:17.620 "firmware_revision": "8.0.0", 00:28:17.620 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:17.620 "oacs": { 00:28:17.620 "security": 0, 00:28:17.620 "format": 1, 00:28:17.620 "firmware": 0, 00:28:17.620 "ns_manage": 1 00:28:17.620 }, 00:28:17.620 "multi_ctrlr": false, 00:28:17.620 "ana_reporting": false 00:28:17.620 }, 00:28:17.620 "vs": { 00:28:17.620 "nvme_version": "1.4" 00:28:17.620 }, 00:28:17.620 "ns_data": { 00:28:17.620 "id": 1, 00:28:17.620 "can_share": false 00:28:17.620 } 00:28:17.620 } 00:28:17.620 ], 00:28:17.620 "mp_policy": "active_passive" 00:28:17.620 } 00:28:17.620 } 00:28:17.620 ]' 00:28:17.620 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:17.620 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:17.620 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=966ebfc0-5018-419c-bdae-ee404747839b 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:17.880 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 966ebfc0-5018-419c-bdae-ee404747839b 00:28:18.139 14:33:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:18.398 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=c40a3b09-1001-4e5f-a3c4-ee1241607ff3 00:28:18.398 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c40a3b09-1001-4e5f-a3c4-ee1241607ff3 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:18.657 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:18.916 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:18.916 { 00:28:18.916 "name": "4ba8c19f-4271-41b7-b78e-525575c3ef7f", 00:28:18.916 "aliases": [ 00:28:18.916 "lvs/nvme0n1p0" 00:28:18.916 ], 00:28:18.916 "product_name": "Logical Volume", 00:28:18.916 "block_size": 4096, 00:28:18.916 "num_blocks": 26476544, 00:28:18.916 "uuid": "4ba8c19f-4271-41b7-b78e-525575c3ef7f", 00:28:18.916 "assigned_rate_limits": { 00:28:18.916 "rw_ios_per_sec": 0, 00:28:18.916 "rw_mbytes_per_sec": 0, 00:28:18.916 "r_mbytes_per_sec": 0, 00:28:18.916 "w_mbytes_per_sec": 0 00:28:18.916 }, 00:28:18.916 "claimed": false, 00:28:18.916 "zoned": false, 00:28:18.916 "supported_io_types": { 00:28:18.916 "read": true, 00:28:18.916 "write": true, 00:28:18.916 "unmap": true, 00:28:18.916 "flush": false, 00:28:18.916 "reset": true, 00:28:18.916 "nvme_admin": false, 00:28:18.916 "nvme_io": false, 00:28:18.916 "nvme_io_md": false, 00:28:18.916 "write_zeroes": true, 00:28:18.916 "zcopy": false, 00:28:18.916 "get_zone_info": false, 00:28:18.916 "zone_management": false, 00:28:18.916 "zone_append": false, 00:28:18.916 "compare": false, 00:28:18.916 "compare_and_write": false, 00:28:18.916 "abort": false, 00:28:18.916 "seek_hole": true, 00:28:18.916 "seek_data": true, 00:28:18.916 "copy": false, 00:28:18.916 "nvme_iov_md": false 00:28:18.916 }, 00:28:18.916 "driver_specific": { 00:28:18.916 "lvol": { 00:28:18.916 "lvol_store_uuid": "c40a3b09-1001-4e5f-a3c4-ee1241607ff3", 00:28:18.916 "base_bdev": "nvme0n1", 00:28:18.916 "thin_provision": true, 00:28:18.916 "num_allocated_clusters": 0, 00:28:18.916 "snapshot": false, 00:28:18.916 "clone": false, 00:28:18.916 "esnap_clone": false 00:28:18.916 } 00:28:18.916 } 00:28:18.916 } 00:28:18.916 ]' 00:28:18.916 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:18.916 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:18.916 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:18.916 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:18.917 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:18.917 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:18.917 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:18.917 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:18.917 14:33:43 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:19.176 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:19.176 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:19.176 14:33:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:19.176 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:19.176 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:19.176 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:19.176 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:19.176 14:33:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:19.435 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:19.435 { 00:28:19.435 "name": "4ba8c19f-4271-41b7-b78e-525575c3ef7f", 00:28:19.435 "aliases": [ 00:28:19.435 "lvs/nvme0n1p0" 00:28:19.435 ], 00:28:19.435 "product_name": "Logical Volume", 00:28:19.435 "block_size": 4096, 00:28:19.435 "num_blocks": 26476544, 00:28:19.435 "uuid": "4ba8c19f-4271-41b7-b78e-525575c3ef7f", 00:28:19.435 "assigned_rate_limits": { 00:28:19.435 "rw_ios_per_sec": 0, 00:28:19.435 "rw_mbytes_per_sec": 0, 00:28:19.435 "r_mbytes_per_sec": 0, 00:28:19.435 "w_mbytes_per_sec": 0 00:28:19.435 }, 00:28:19.435 "claimed": false, 00:28:19.435 "zoned": false, 00:28:19.435 "supported_io_types": { 00:28:19.435 "read": true, 00:28:19.435 "write": true, 00:28:19.435 "unmap": true, 00:28:19.435 "flush": false, 00:28:19.435 "reset": true, 00:28:19.435 "nvme_admin": false, 00:28:19.435 "nvme_io": false, 00:28:19.435 "nvme_io_md": false, 00:28:19.435 "write_zeroes": true, 00:28:19.435 "zcopy": false, 00:28:19.435 "get_zone_info": false, 00:28:19.435 "zone_management": false, 00:28:19.435 "zone_append": false, 00:28:19.435 "compare": false, 00:28:19.435 "compare_and_write": false, 00:28:19.435 "abort": false, 00:28:19.435 "seek_hole": true, 00:28:19.435 "seek_data": true, 00:28:19.435 "copy": false, 00:28:19.435 "nvme_iov_md": false 00:28:19.435 }, 00:28:19.435 "driver_specific": { 00:28:19.435 "lvol": { 00:28:19.435 "lvol_store_uuid": "c40a3b09-1001-4e5f-a3c4-ee1241607ff3", 00:28:19.435 "base_bdev": "nvme0n1", 00:28:19.435 "thin_provision": true, 00:28:19.435 "num_allocated_clusters": 0, 00:28:19.435 "snapshot": false, 00:28:19.435 "clone": false, 00:28:19.435 "esnap_clone": false 00:28:19.435 } 00:28:19.435 } 00:28:19.435 } 00:28:19.435 ]' 00:28:19.435 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:19.435 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:19.435 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:19.694 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ba8c19f-4271-41b7-b78e-525575c3ef7f 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:19.953 { 00:28:19.953 "name": "4ba8c19f-4271-41b7-b78e-525575c3ef7f", 00:28:19.953 "aliases": [ 00:28:19.953 "lvs/nvme0n1p0" 00:28:19.953 ], 00:28:19.953 "product_name": "Logical Volume", 00:28:19.953 "block_size": 4096, 00:28:19.953 "num_blocks": 26476544, 00:28:19.953 "uuid": "4ba8c19f-4271-41b7-b78e-525575c3ef7f", 00:28:19.953 "assigned_rate_limits": { 00:28:19.953 "rw_ios_per_sec": 0, 00:28:19.953 "rw_mbytes_per_sec": 0, 00:28:19.953 "r_mbytes_per_sec": 0, 00:28:19.953 "w_mbytes_per_sec": 0 00:28:19.953 }, 00:28:19.953 "claimed": false, 00:28:19.953 "zoned": false, 00:28:19.953 "supported_io_types": { 00:28:19.953 "read": true, 00:28:19.953 "write": true, 00:28:19.953 "unmap": true, 00:28:19.953 "flush": false, 00:28:19.953 "reset": true, 00:28:19.953 "nvme_admin": false, 00:28:19.953 "nvme_io": false, 00:28:19.953 "nvme_io_md": false, 00:28:19.953 "write_zeroes": true, 00:28:19.953 "zcopy": false, 00:28:19.953 "get_zone_info": false, 00:28:19.953 "zone_management": false, 00:28:19.953 "zone_append": false, 00:28:19.953 "compare": false, 00:28:19.953 "compare_and_write": false, 00:28:19.953 "abort": false, 00:28:19.953 "seek_hole": true, 00:28:19.953 "seek_data": true, 00:28:19.953 "copy": false, 00:28:19.953 "nvme_iov_md": false 00:28:19.953 }, 00:28:19.953 "driver_specific": { 00:28:19.953 "lvol": { 00:28:19.953 "lvol_store_uuid": "c40a3b09-1001-4e5f-a3c4-ee1241607ff3", 00:28:19.953 "base_bdev": "nvme0n1", 00:28:19.953 "thin_provision": true, 00:28:19.953 "num_allocated_clusters": 0, 00:28:19.953 "snapshot": false, 00:28:19.953 "clone": false, 00:28:19.953 "esnap_clone": false 00:28:19.953 } 00:28:19.953 } 00:28:19.953 } 00:28:19.953 ]' 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4ba8c19f-4271-41b7-b78e-525575c3ef7f 
--l2p_dram_limit 10' 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:19.953 14:33:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4ba8c19f-4271-41b7-b78e-525575c3ef7f --l2p_dram_limit 10 -c nvc0n1p0 00:28:20.213 [2024-12-10 14:33:44.960210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.213 [2024-12-10 14:33:44.960266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:20.213 [2024-12-10 14:33:44.960288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:20.213 [2024-12-10 14:33:44.960300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.213 [2024-12-10 14:33:44.960380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.213 [2024-12-10 14:33:44.960392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:20.213 [2024-12-10 14:33:44.960407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:20.213 [2024-12-10 14:33:44.960418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.213 [2024-12-10 14:33:44.960449] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:20.213 [2024-12-10 14:33:44.961532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:20.213 [2024-12-10 14:33:44.961573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.213 [2024-12-10 14:33:44.961585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:20.213 [2024-12-10 14:33:44.961600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:28:20.213 [2024-12-10 14:33:44.961611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.213 [2024-12-10 14:33:44.961736] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b799b7cb-f6a0-41cf-a325-8a1affd9b1f3 00:28:20.213 [2024-12-10 14:33:44.964066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.213 [2024-12-10 14:33:44.964229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:20.213 [2024-12-10 14:33:44.964251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:20.213 [2024-12-10 14:33:44.964266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.213 [2024-12-10 14:33:44.978321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.213 [2024-12-10 14:33:44.978358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:20.213 [2024-12-10 14:33:44.978372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.997 ms 00:28:20.213 [2024-12-10 14:33:44.978385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.213 [2024-12-10 14:33:44.978493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.213 [2024-12-10 14:33:44.978511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:20.214 [2024-12-10 14:33:44.978522] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:20.214 [2024-12-10 14:33:44.978539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.214 [2024-12-10 14:33:44.978601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.214 [2024-12-10 14:33:44.978616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:20.214 [2024-12-10 14:33:44.978631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:20.214 [2024-12-10 14:33:44.978644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.214 [2024-12-10 14:33:44.978691] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:20.214 [2024-12-10 14:33:44.985309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.214 [2024-12-10 14:33:44.985340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:20.214 [2024-12-10 14:33:44.985357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.653 ms 00:28:20.214 [2024-12-10 14:33:44.985367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.214 [2024-12-10 14:33:44.985409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.214 [2024-12-10 14:33:44.985420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:20.214 [2024-12-10 14:33:44.985433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:20.214 [2024-12-10 14:33:44.985443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.214 [2024-12-10 14:33:44.985505] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:20.214 [2024-12-10 14:33:44.985651] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:20.214 [2024-12-10 14:33:44.985672] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:20.214 [2024-12-10 14:33:44.985699] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:20.214 [2024-12-10 14:33:44.985717] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:20.214 [2024-12-10 14:33:44.985729] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:20.214 [2024-12-10 14:33:44.985745] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:20.214 [2024-12-10 14:33:44.985756] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:20.214 [2024-12-10 14:33:44.985775] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:20.214 [2024-12-10 14:33:44.985785] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:20.214 [2024-12-10 14:33:44.985799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.214 [2024-12-10 14:33:44.985842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:20.214 [2024-12-10 14:33:44.985857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:28:20.214 [2024-12-10 14:33:44.985884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.214 [2024-12-10 14:33:44.985965] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.214 [2024-12-10 14:33:44.985977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:20.214 [2024-12-10 14:33:44.985990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:20.214 [2024-12-10 14:33:44.986001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.214 [2024-12-10 14:33:44.986107] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:20.214 [2024-12-10 14:33:44.986122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:20.214 [2024-12-10 14:33:44.986137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:20.214 [2024-12-10 14:33:44.986148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:20.214 [2024-12-10 14:33:44.986172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:20.214 [2024-12-10 14:33:44.986194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:20.214 [2024-12-10 14:33:44.986207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:20.214 [2024-12-10 14:33:44.986230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:20.214 [2024-12-10 14:33:44.986240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:20.214 [2024-12-10 14:33:44.986252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:20.214 [2024-12-10 14:33:44.986262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:20.214 [2024-12-10 14:33:44.986276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:20.214 [2024-12-10 14:33:44.986286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:20.214 [2024-12-10 14:33:44.986312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:20.214 [2024-12-10 14:33:44.986324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:20.214 [2024-12-10 14:33:44.986347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.214 [2024-12-10 14:33:44.986369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:20.214 [2024-12-10 14:33:44.986378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.214 [2024-12-10 14:33:44.986400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:20.214 [2024-12-10 14:33:44.986412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.214 [2024-12-10 14:33:44.986434] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:20.214 [2024-12-10 14:33:44.986444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.214 [2024-12-10 14:33:44.986465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:20.214 [2024-12-10 14:33:44.986481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:20.214 [2024-12-10 14:33:44.986502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:20.214 [2024-12-10 14:33:44.986511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:20.214 [2024-12-10 14:33:44.986525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:20.214 [2024-12-10 14:33:44.986535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:20.214 [2024-12-10 14:33:44.986548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:20.214 [2024-12-10 14:33:44.986557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:20.214 [2024-12-10 14:33:44.986578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:20.214 [2024-12-10 14:33:44.986591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986600] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:20.214 [2024-12-10 14:33:44.986614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:20.214 [2024-12-10 14:33:44.986624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:20.214 [2024-12-10 14:33:44.986637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.214 [2024-12-10 14:33:44.986649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:20.214 [2024-12-10 14:33:44.986664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:20.214 [2024-12-10 14:33:44.986674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:20.214 [2024-12-10 14:33:44.986697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:20.214 [2024-12-10 14:33:44.986708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:20.214 [2024-12-10 14:33:44.986720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:20.214 [2024-12-10 14:33:44.986732] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:20.214 [2024-12-10 14:33:44.986751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.214 [2024-12-10 14:33:44.986764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:20.214 [2024-12-10 14:33:44.986778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:20.214 [2024-12-10 14:33:44.986789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:20.214 [2024-12-10 14:33:44.986802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:20.214 [2024-12-10 14:33:44.986813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:20.214 [2024-12-10 14:33:44.986827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:20.214 [2024-12-10 14:33:44.986837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:20.214 [2024-12-10 14:33:44.986853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:20.214 [2024-12-10 14:33:44.986863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:20.214 [2024-12-10 14:33:44.986880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:20.214 [2024-12-10 14:33:44.986890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:20.214 [2024-12-10 14:33:44.986903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:20.214 [2024-12-10 14:33:44.986914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:20.214 [2024-12-10 14:33:44.986927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:20.215 [2024-12-10 14:33:44.986937] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:20.215 [2024-12-10 14:33:44.986952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.215 [2024-12-10 14:33:44.986964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:20.215 [2024-12-10 14:33:44.986978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:20.215 [2024-12-10 14:33:44.986988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:20.215 [2024-12-10 14:33:44.987005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:20.215 [2024-12-10 14:33:44.987016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.215 [2024-12-10 14:33:44.987030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:20.215 [2024-12-10 14:33:44.987041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:28:20.215 [2024-12-10 14:33:44.987057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.215 [2024-12-10 14:33:44.987104] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:20.215 [2024-12-10 14:33:44.987124] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:24.415 [2024-12-10 14:33:48.536611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.536694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:24.415 [2024-12-10 14:33:48.536715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3555.267 ms 00:28:24.415 [2024-12-10 14:33:48.536731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.577072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.577130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.415 [2024-12-10 14:33:48.577148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.086 ms 00:28:24.415 [2024-12-10 14:33:48.577163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.577299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.577319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:24.415 [2024-12-10 14:33:48.577332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:24.415 [2024-12-10 14:33:48.577354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.623001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.623053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.415 [2024-12-10 14:33:48.623069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.676 ms 00:28:24.415 [2024-12-10 14:33:48.623084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.623121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.623142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.415 [2024-12-10 14:33:48.623155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:24.415 [2024-12-10 14:33:48.623183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.623667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.623716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.415 [2024-12-10 14:33:48.623729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:28:24.415 [2024-12-10 14:33:48.623743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.623841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.623858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.415 [2024-12-10 14:33:48.623874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:24.415 [2024-12-10 14:33:48.623891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.643693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.643741] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.415 [2024-12-10 14:33:48.643757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.810 ms 00:28:24.415 [2024-12-10 14:33:48.643772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.678752] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:24.415 [2024-12-10 14:33:48.682937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.682977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:24.415 [2024-12-10 14:33:48.683003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.120 ms 00:28:24.415 [2024-12-10 14:33:48.683020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.772092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.772133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:24.415 [2024-12-10 14:33:48.772153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.167 ms 00:28:24.415 [2024-12-10 14:33:48.772165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.772336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.772354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:24.415 [2024-12-10 14:33:48.772373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:28:24.415 [2024-12-10 14:33:48.772385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.806572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.806612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:24.415 [2024-12-10 14:33:48.806631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.181 ms 00:28:24.415 [2024-12-10 14:33:48.806644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.839625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.839665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:24.415 [2024-12-10 14:33:48.839693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.963 ms 00:28:24.415 [2024-12-10 14:33:48.839705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.840413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.840442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:24.415 [2024-12-10 14:33:48.840459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:28:24.415 [2024-12-10 14:33:48.840474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.937957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.937996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:24.415 [2024-12-10 14:33:48.938019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.577 ms 00:28:24.415 [2024-12-10 14:33:48.938032] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:48.972763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:48.972803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:24.415 [2024-12-10 14:33:48.972822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.694 ms 00:28:24.415 [2024-12-10 14:33:48.972834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:49.006788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:49.006827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:24.415 [2024-12-10 14:33:49.006846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.957 ms 00:28:24.415 [2024-12-10 14:33:49.006857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:49.041236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:49.041275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:24.415 [2024-12-10 14:33:49.041293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.383 ms 00:28:24.415 [2024-12-10 14:33:49.041305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:49.041357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:49.041371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:24.415 [2024-12-10 14:33:49.041389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:24.415 [2024-12-10 14:33:49.041401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:49.041512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.415 [2024-12-10 14:33:49.041529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:24.415 [2024-12-10 14:33:49.041545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:24.415 [2024-12-10 14:33:49.041557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.415 [2024-12-10 14:33:49.042589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4088.568 ms, result 0 00:28:24.415 { 00:28:24.415 "name": "ftl0", 00:28:24.415 "uuid": "b799b7cb-f6a0-41cf-a325-8a1affd9b1f3" 00:28:24.415 } 00:28:24.415 14:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:24.415 14:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:24.674 14:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:24.674 14:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:24.674 14:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:24.674 /dev/nbd0 00:28:24.674 14:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:24.674 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:24.674 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:28:24.674 14:33:49 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:24.674 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:24.674 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:24.942 1+0 records in 00:28:24.942 1+0 records out 00:28:24.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036809 s, 11.1 MB/s 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:28:24.942 14:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:24.942 [2024-12-10 14:33:49.629433] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:28:24.942 [2024-12-10 14:33:49.629574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82783 ] 00:28:25.201 [2024-12-10 14:33:49.814135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.201 [2024-12-10 14:33:49.946010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.579  [2024-12-10T14:33:52.349Z] Copying: 201/1024 [MB] (201 MBps) [2024-12-10T14:33:53.728Z] Copying: 403/1024 [MB] (202 MBps) [2024-12-10T14:33:54.666Z] Copying: 606/1024 [MB] (202 MBps) [2024-12-10T14:33:55.609Z] Copying: 805/1024 [MB] (199 MBps) [2024-12-10T14:33:55.609Z] Copying: 994/1024 [MB] (188 MBps) [2024-12-10T14:33:56.546Z] Copying: 1024/1024 [MB] (average 199 MBps) 00:28:31.712 00:28:31.971 14:33:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:33.878 14:33:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:33.878 [2024-12-10 14:33:58.356427] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:28:33.878 [2024-12-10 14:33:58.356752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82877 ] 00:28:33.878 [2024-12-10 14:33:58.541065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.878 [2024-12-10 14:33:58.662088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.257  [2024-12-10T14:34:01.472Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-10T14:34:02.040Z] Copying: 32/1024 [MB] (16 MBps) [2024-12-10T14:34:03.419Z] Copying: 49/1024 [MB] (16 MBps) [2024-12-10T14:34:04.357Z] Copying: 65/1024 [MB] (16 MBps) [2024-12-10T14:34:05.302Z] Copying: 80/1024 [MB] (15 MBps) [2024-12-10T14:34:06.239Z] Copying: 96/1024 [MB] (15 MBps) [2024-12-10T14:34:07.209Z] Copying: 110/1024 [MB] (14 MBps) [2024-12-10T14:34:08.172Z] Copying: 126/1024 [MB] (15 MBps) [2024-12-10T14:34:09.109Z] Copying: 141/1024 [MB] (15 MBps) [2024-12-10T14:34:10.046Z] Copying: 156/1024 [MB] (15 MBps) [2024-12-10T14:34:11.425Z] Copying: 172/1024 [MB] (15 MBps) [2024-12-10T14:34:12.363Z] Copying: 188/1024 [MB] (15 MBps) [2024-12-10T14:34:13.300Z] Copying: 203/1024 [MB] (15 MBps) [2024-12-10T14:34:14.237Z] Copying: 219/1024 [MB] (15 MBps) [2024-12-10T14:34:15.175Z] Copying: 234/1024 [MB] (15 MBps) [2024-12-10T14:34:16.112Z] Copying: 249/1024 [MB] (15 MBps) [2024-12-10T14:34:17.049Z] Copying: 265/1024 [MB] (15 MBps) [2024-12-10T14:34:18.428Z] Copying: 280/1024 [MB] (15 MBps) [2024-12-10T14:34:19.365Z] Copying: 295/1024 [MB] (15 MBps) [2024-12-10T14:34:20.307Z] Copying: 311/1024 [MB] (15 MBps) [2024-12-10T14:34:21.244Z] Copying: 326/1024 [MB] (15 MBps) [2024-12-10T14:34:22.181Z] Copying: 342/1024 [MB] (15 MBps) [2024-12-10T14:34:23.117Z] Copying: 358/1024 [MB] (15 MBps) [2024-12-10T14:34:24.054Z] Copying: 374/1024 [MB] (15 MBps) [2024-12-10T14:34:25.431Z] Copying: 389/1024 [MB] (15 MBps) [2024-12-10T14:34:25.999Z] Copying: 404/1024 [MB] (15 MBps) [2024-12-10T14:34:27.387Z] Copying: 420/1024 [MB] (15 MBps) [2024-12-10T14:34:28.324Z] Copying: 435/1024 [MB] (15 MBps) [2024-12-10T14:34:29.261Z] Copying: 450/1024 [MB] (15 MBps) [2024-12-10T14:34:30.198Z] Copying: 465/1024 [MB] (15 MBps) [2024-12-10T14:34:31.136Z] Copying: 481/1024 [MB] (15 MBps) [2024-12-10T14:34:32.073Z] Copying: 497/1024 [MB] (15 MBps) [2024-12-10T14:34:33.016Z] Copying: 512/1024 [MB] (15 MBps) [2024-12-10T14:34:33.989Z] Copying: 528/1024 [MB] (15 MBps) [2024-12-10T14:34:35.366Z] Copying: 543/1024 [MB] (15 MBps) [2024-12-10T14:34:36.303Z] Copying: 559/1024 [MB] (15 MBps) [2024-12-10T14:34:37.241Z] Copying: 575/1024 [MB] (15 MBps) [2024-12-10T14:34:38.179Z] Copying: 591/1024 [MB] (15 MBps) [2024-12-10T14:34:39.116Z] Copying: 607/1024 [MB] (15 MBps) [2024-12-10T14:34:40.054Z] Copying: 623/1024 [MB] (15 MBps) [2024-12-10T14:34:40.991Z] Copying: 638/1024 [MB] (15 MBps) [2024-12-10T14:34:42.370Z] Copying: 654/1024 [MB] (15 MBps) [2024-12-10T14:34:43.307Z] Copying: 670/1024 [MB] (15 MBps) [2024-12-10T14:34:44.246Z] Copying: 686/1024 [MB] (15 MBps) [2024-12-10T14:34:45.182Z] Copying: 702/1024 [MB] (15 MBps) [2024-12-10T14:34:46.120Z] Copying: 717/1024 [MB] (15 MBps) [2024-12-10T14:34:47.057Z] Copying: 733/1024 [MB] (15 MBps) [2024-12-10T14:34:47.994Z] Copying: 749/1024 [MB] (15 MBps) [2024-12-10T14:34:49.372Z] Copying: 764/1024 [MB] (15 MBps) [2024-12-10T14:34:50.309Z] Copying: 780/1024 [MB] (15 MBps) 
[2024-12-10T14:34:51.246Z] Copying: 796/1024 [MB] (15 MBps) [2024-12-10T14:34:52.182Z] Copying: 812/1024 [MB] (15 MBps) [2024-12-10T14:34:53.120Z] Copying: 827/1024 [MB] (15 MBps) [2024-12-10T14:34:54.057Z] Copying: 843/1024 [MB] (15 MBps) [2024-12-10T14:34:54.994Z] Copying: 859/1024 [MB] (15 MBps) [2024-12-10T14:34:56.377Z] Copying: 874/1024 [MB] (15 MBps) [2024-12-10T14:34:57.314Z] Copying: 891/1024 [MB] (16 MBps) [2024-12-10T14:34:58.251Z] Copying: 907/1024 [MB] (16 MBps) [2024-12-10T14:34:59.209Z] Copying: 923/1024 [MB] (16 MBps) [2024-12-10T14:35:00.160Z] Copying: 939/1024 [MB] (15 MBps) [2024-12-10T14:35:01.097Z] Copying: 955/1024 [MB] (15 MBps) [2024-12-10T14:35:02.034Z] Copying: 971/1024 [MB] (15 MBps) [2024-12-10T14:35:02.971Z] Copying: 986/1024 [MB] (15 MBps) [2024-12-10T14:35:04.349Z] Copying: 1002/1024 [MB] (15 MBps) [2024-12-10T14:35:04.349Z] Copying: 1018/1024 [MB] (15 MBps) [2024-12-10T14:35:05.728Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:29:40.894 00:29:40.894 14:35:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:29:40.894 14:35:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:29:41.153 14:35:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:41.153 [2024-12-10 14:35:05.964718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.153 [2024-12-10 14:35:05.964930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:41.153 [2024-12-10 14:35:05.964958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:41.153 [2024-12-10 14:35:05.964974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.153 [2024-12-10 14:35:05.965013] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:41.153 [2024-12-10 14:35:05.968934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.153 [2024-12-10 14:35:05.968976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:41.153 [2024-12-10 14:35:05.968994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.893 ms 00:29:41.153 [2024-12-10 14:35:05.969006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.153 [2024-12-10 14:35:05.971096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.154 [2024-12-10 14:35:05.971142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:41.154 [2024-12-10 14:35:05.971160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:29:41.154 [2024-12-10 14:35:05.971172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:05.989397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.414 [2024-12-10 14:35:05.989441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:41.414 [2024-12-10 14:35:05.989459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.222 ms 00:29:41.414 [2024-12-10 14:35:05.989481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:05.994183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.414 [2024-12-10 14:35:05.994230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:41.414 
[2024-12-10 14:35:05.994248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:29:41.414 [2024-12-10 14:35:05.994261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:06.029141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.414 [2024-12-10 14:35:06.029184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:41.414 [2024-12-10 14:35:06.029203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.844 ms 00:29:41.414 [2024-12-10 14:35:06.029216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:06.050554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.414 [2024-12-10 14:35:06.050599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:41.414 [2024-12-10 14:35:06.050622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.319 ms 00:29:41.414 [2024-12-10 14:35:06.050634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:06.050801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.414 [2024-12-10 14:35:06.050818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:41.414 [2024-12-10 14:35:06.050835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:29:41.414 [2024-12-10 14:35:06.050846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:06.085513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.414 [2024-12-10 14:35:06.085556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:41.414 [2024-12-10 14:35:06.085575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.697 ms 00:29:41.414 [2024-12-10 14:35:06.085589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:06.120036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.414 [2024-12-10 14:35:06.120239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:41.414 [2024-12-10 14:35:06.120270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.454 ms 00:29:41.414 [2024-12-10 14:35:06.120282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:06.153431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.414 [2024-12-10 14:35:06.153477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:41.414 [2024-12-10 14:35:06.153497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.131 ms 00:29:41.414 [2024-12-10 14:35:06.153509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:06.186444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.414 [2024-12-10 14:35:06.186485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:41.414 [2024-12-10 14:35:06.186505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.882 ms 00:29:41.414 [2024-12-10 14:35:06.186517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.414 [2024-12-10 14:35:06.186581] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:41.414 [2024-12-10 14:35:06.186604] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.186978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 
14:35:06.186994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:29:41.415 [2024-12-10 14:35:06.187361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.187980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.188976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.189035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.189143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.189217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.189280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.189346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.189461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.189489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:41.415 [2024-12-10 14:35:06.189514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:41.416 [2024-12-10 14:35:06.189530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:41.416 [2024-12-10 14:35:06.189543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:41.416 [2024-12-10 14:35:06.189559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:41.416 [2024-12-10 14:35:06.189572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:41.416 [2024-12-10 14:35:06.189588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:41.416 [2024-12-10 14:35:06.189602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:41.416 [2024-12-10 14:35:06.189619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:41.416 [2024-12-10 14:35:06.189641] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:41.416 [2024-12-10 14:35:06.189658] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b799b7cb-f6a0-41cf-a325-8a1affd9b1f3 00:29:41.416 [2024-12-10 14:35:06.189682] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:41.416 [2024-12-10 14:35:06.189701] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:41.416 [2024-12-10 14:35:06.189717] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:41.416 [2024-12-10 14:35:06.189733] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:41.416 [2024-12-10 14:35:06.189745] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:41.416 [2024-12-10 14:35:06.189761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:41.416 [2024-12-10 14:35:06.189773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:41.416 [2024-12-10 14:35:06.189788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:41.416 [2024-12-10 14:35:06.189799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:41.416 [2024-12-10 14:35:06.189816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.416 [2024-12-10 14:35:06.189828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:41.416 [2024-12-10 14:35:06.189845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:29:41.416 [2024-12-10 14:35:06.189857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.416 [2024-12-10 14:35:06.208494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.416 [2024-12-10 14:35:06.208647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:41.416 [2024-12-10 14:35:06.208698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.602 ms 00:29:41.416 [2024-12-10 14:35:06.208712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.416 [2024-12-10 14:35:06.209199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.416 [2024-12-10 14:35:06.209218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:41.416 [2024-12-10 14:35:06.209233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:29:41.416 [2024-12-10 14:35:06.209246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.269888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.270033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:41.676 [2024-12-10 14:35:06.270145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.270187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.270266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.270354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:41.676 [2024-12-10 14:35:06.270399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.270432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.270600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.270763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:41.676 [2024-12-10 14:35:06.270854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.270894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.271145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.271198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:41.676 [2024-12-10 14:35:06.271239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.271273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.386279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.386516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:41.676 [2024-12-10 14:35:06.386687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.386734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.482216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.482425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:41.676 [2024-12-10 14:35:06.482523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.482566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.482733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.482781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:41.676 [2024-12-10 14:35:06.483041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.483083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.483189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.483229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:41.676 [2024-12-10 14:35:06.483444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.483487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.483644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.483717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:41.676 [2024-12-10 14:35:06.483840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.483890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.483980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.484164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:41.676 [2024-12-10 14:35:06.484212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 
[2024-12-10 14:35:06.484248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.484322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.484500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:41.676 [2024-12-10 14:35:06.484546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.484591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.484700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.676 [2024-12-10 14:35:06.484748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:41.676 [2024-12-10 14:35:06.484787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.676 [2024-12-10 14:35:06.484822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.676 [2024-12-10 14:35:06.485062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.140 ms, result 0 00:29:41.676 true 00:29:41.936 14:35:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82627 00:29:41.936 14:35:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82627 00:29:41.936 14:35:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:29:41.936 [2024-12-10 14:35:06.620271] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:29:41.936 [2024-12-10 14:35:06.620379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83563 ] 00:29:42.195 [2024-12-10 14:35:06.801519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.195 [2024-12-10 14:35:06.907111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.574  [2024-12-10T14:35:09.346Z] Copying: 201/1024 [MB] (201 MBps) [2024-12-10T14:35:10.284Z] Copying: 402/1024 [MB] (200 MBps) [2024-12-10T14:35:11.222Z] Copying: 606/1024 [MB] (203 MBps) [2024-12-10T14:35:12.602Z] Copying: 807/1024 [MB] (200 MBps) [2024-12-10T14:35:12.602Z] Copying: 1005/1024 [MB] (198 MBps) [2024-12-10T14:35:13.540Z] Copying: 1024/1024 [MB] (average 201 MBps) 00:29:48.706 00:29:48.706 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82627 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:29:48.706 14:35:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:48.706 [2024-12-10 14:35:13.485156] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:29:48.706 [2024-12-10 14:35:13.485301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83632 ] 00:29:48.965 [2024-12-10 14:35:13.683915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.965 [2024-12-10 14:35:13.788222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.534 [2024-12-10 14:35:14.132646] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:49.534 [2024-12-10 14:35:14.132739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:49.534 [2024-12-10 14:35:14.199083] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:49.534 [2024-12-10 14:35:14.199425] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:49.534 [2024-12-10 14:35:14.199654] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:49.794 [2024-12-10 14:35:14.531315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.794 [2024-12-10 14:35:14.531365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:49.794 [2024-12-10 14:35:14.531382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:49.794 [2024-12-10 14:35:14.531398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.794 [2024-12-10 14:35:14.531449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.794 [2024-12-10 14:35:14.531462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:49.794 [2024-12-10 14:35:14.531474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:49.794 [2024-12-10 14:35:14.531485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.794 [2024-12-10 14:35:14.531510] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:49.794 [2024-12-10 14:35:14.532431] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:49.794 [2024-12-10 14:35:14.532467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.794 [2024-12-10 14:35:14.532479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:49.794 [2024-12-10 14:35:14.532491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:29:49.794 [2024-12-10 14:35:14.532501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.794 [2024-12-10 14:35:14.533980] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:49.794 [2024-12-10 14:35:14.551412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.794 [2024-12-10 14:35:14.551684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:49.794 [2024-12-10 14:35:14.551709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.461 ms 00:29:49.794 [2024-12-10 14:35:14.551721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.794 [2024-12-10 14:35:14.551789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.794 [2024-12-10 14:35:14.551803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:29:49.794 [2024-12-10 14:35:14.551817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:49.794 [2024-12-10 14:35:14.551830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.794 [2024-12-10 14:35:14.558800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.794 [2024-12-10 14:35:14.558832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:49.794 [2024-12-10 14:35:14.558846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.899 ms 00:29:49.794 [2024-12-10 14:35:14.558857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.794 [2024-12-10 14:35:14.558936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.794 [2024-12-10 14:35:14.558952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:49.794 [2024-12-10 14:35:14.558965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:49.794 [2024-12-10 14:35:14.558976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.794 [2024-12-10 14:35:14.559023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.794 [2024-12-10 14:35:14.559037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:49.794 [2024-12-10 14:35:14.559049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:49.794 [2024-12-10 14:35:14.559060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.794 [2024-12-10 14:35:14.559086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:49.794 [2024-12-10 14:35:14.563859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.794 [2024-12-10 14:35:14.563900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:49.794 [2024-12-10 14:35:14.563914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.787 ms 00:29:49.795 [2024-12-10 14:35:14.563926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.795 [2024-12-10 14:35:14.563962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.795 [2024-12-10 14:35:14.563975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:49.795 [2024-12-10 14:35:14.563987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:49.795 [2024-12-10 14:35:14.563998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.795 [2024-12-10 14:35:14.564061] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:49.795 [2024-12-10 14:35:14.564089] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:49.795 [2024-12-10 14:35:14.564124] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:49.795 [2024-12-10 14:35:14.564143] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:49.795 [2024-12-10 14:35:14.564229] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:49.795 [2024-12-10 14:35:14.564244] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:49.795 
[2024-12-10 14:35:14.564259] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:49.795 [2024-12-10 14:35:14.564278] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:49.795 [2024-12-10 14:35:14.564292] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:49.795 [2024-12-10 14:35:14.564304] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:49.795 [2024-12-10 14:35:14.564315] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:49.795 [2024-12-10 14:35:14.564326] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:49.795 [2024-12-10 14:35:14.564339] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:49.795 [2024-12-10 14:35:14.564351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.795 [2024-12-10 14:35:14.564363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:49.795 [2024-12-10 14:35:14.564374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:29:49.795 [2024-12-10 14:35:14.564385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.795 [2024-12-10 14:35:14.564458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.795 [2024-12-10 14:35:14.564475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:49.795 [2024-12-10 14:35:14.564487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:49.795 [2024-12-10 14:35:14.564498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.795 [2024-12-10 14:35:14.564591] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:49.795 [2024-12-10 14:35:14.564609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:49.795 [2024-12-10 14:35:14.564622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:49.795 [2024-12-10 14:35:14.564634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:49.795 [2024-12-10 14:35:14.564656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:49.795 [2024-12-10 14:35:14.564695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:49.795 [2024-12-10 14:35:14.564706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:49.795 [2024-12-10 14:35:14.564739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:49.795 [2024-12-10 14:35:14.564751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:49.795 [2024-12-10 14:35:14.564761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:49.795 [2024-12-10 14:35:14.564771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:49.795 [2024-12-10 14:35:14.564782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:49.795 [2024-12-10 14:35:14.564792] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:49.795 [2024-12-10 14:35:14.564813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:49.795 [2024-12-10 14:35:14.564823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:49.795 [2024-12-10 14:35:14.564844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.795 [2024-12-10 14:35:14.564865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:49.795 [2024-12-10 14:35:14.564880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.795 [2024-12-10 14:35:14.564899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:49.795 [2024-12-10 14:35:14.564909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.795 [2024-12-10 14:35:14.564929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:49.795 [2024-12-10 14:35:14.564939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.795 [2024-12-10 14:35:14.564959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:49.795 [2024-12-10 14:35:14.564970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:49.795 [2024-12-10 14:35:14.564980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:49.795 [2024-12-10 14:35:14.564989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:49.795 [2024-12-10 14:35:14.564999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:49.795 [2024-12-10 14:35:14.565009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:49.795 [2024-12-10 14:35:14.565018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:49.795 [2024-12-10 14:35:14.565028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:49.795 [2024-12-10 14:35:14.565038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.795 [2024-12-10 14:35:14.565048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:49.795 [2024-12-10 14:35:14.565058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:49.795 [2024-12-10 14:35:14.565069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.795 [2024-12-10 14:35:14.565080] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:49.795 [2024-12-10 14:35:14.565091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:49.795 [2024-12-10 14:35:14.565106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:49.795 [2024-12-10 14:35:14.565117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.795 [2024-12-10 
14:35:14.565128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:49.795 [2024-12-10 14:35:14.565138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:49.795 [2024-12-10 14:35:14.565149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:49.795 [2024-12-10 14:35:14.565160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:49.795 [2024-12-10 14:35:14.565170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:49.795 [2024-12-10 14:35:14.565181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:49.795 [2024-12-10 14:35:14.565192] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:49.795 [2024-12-10 14:35:14.565205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.795 [2024-12-10 14:35:14.565218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:49.795 [2024-12-10 14:35:14.565229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:49.795 [2024-12-10 14:35:14.565241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:49.795 [2024-12-10 14:35:14.565252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:49.795 [2024-12-10 14:35:14.565263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:49.795 [2024-12-10 14:35:14.565275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:49.795 [2024-12-10 14:35:14.565286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:49.795 [2024-12-10 14:35:14.565297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:49.795 [2024-12-10 14:35:14.565308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:49.795 [2024-12-10 14:35:14.565318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:49.795 [2024-12-10 14:35:14.565330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:49.795 [2024-12-10 14:35:14.565341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:49.795 [2024-12-10 14:35:14.565353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:49.795 [2024-12-10 14:35:14.565364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:49.795 [2024-12-10 14:35:14.565375] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:29:49.795 [2024-12-10 14:35:14.565388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.795 [2024-12-10 14:35:14.565399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:49.795 [2024-12-10 14:35:14.565411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:49.795 [2024-12-10 14:35:14.565422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:49.796 [2024-12-10 14:35:14.565434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:49.796 [2024-12-10 14:35:14.565448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.796 [2024-12-10 14:35:14.565461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:49.796 [2024-12-10 14:35:14.565480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:29:49.796 [2024-12-10 14:35:14.565491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.796 [2024-12-10 14:35:14.603721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.796 [2024-12-10 14:35:14.603761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:49.796 [2024-12-10 14:35:14.603776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.240 ms 00:29:49.796 [2024-12-10 14:35:14.603789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.796 [2024-12-10 14:35:14.603869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.796 [2024-12-10 14:35:14.603882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:49.796 [2024-12-10 14:35:14.603895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:49.796 [2024-12-10 14:35:14.603906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.055 [2024-12-10 14:35:14.677298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.055 [2024-12-10 14:35:14.677336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:50.055 [2024-12-10 14:35:14.677357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.449 ms 00:29:50.055 [2024-12-10 14:35:14.677369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.055 [2024-12-10 14:35:14.677409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.055 [2024-12-10 14:35:14.677422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:50.055 [2024-12-10 14:35:14.677434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:50.055 [2024-12-10 14:35:14.677445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.055 [2024-12-10 14:35:14.677994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.055 [2024-12-10 14:35:14.678021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:50.055 [2024-12-10 14:35:14.678034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:29:50.055 [2024-12-10 14:35:14.678055] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.055 [2024-12-10 14:35:14.678176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.055 [2024-12-10 14:35:14.678193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:50.055 [2024-12-10 14:35:14.678206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:29:50.055 [2024-12-10 14:35:14.678218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.055 [2024-12-10 14:35:14.695851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.055 [2024-12-10 14:35:14.695892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:50.056 [2024-12-10 14:35:14.695907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.638 ms 00:29:50.056 [2024-12-10 14:35:14.695918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.713710] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:50.056 [2024-12-10 14:35:14.713754] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:50.056 [2024-12-10 14:35:14.713771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.713783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:50.056 [2024-12-10 14:35:14.713796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.772 ms 00:29:50.056 [2024-12-10 14:35:14.713807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.742380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.742426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:50.056 [2024-12-10 14:35:14.742442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.569 ms 00:29:50.056 [2024-12-10 14:35:14.742454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.759462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.759505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:50.056 [2024-12-10 14:35:14.759521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.989 ms 00:29:50.056 [2024-12-10 14:35:14.759532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.775954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.775996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:50.056 [2024-12-10 14:35:14.776012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.406 ms 00:29:50.056 [2024-12-10 14:35:14.776022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.776754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.776783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:50.056 [2024-12-10 14:35:14.776797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:29:50.056 [2024-12-10 14:35:14.776809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:50.056 [2024-12-10 14:35:14.859962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.860025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:50.056 [2024-12-10 14:35:14.860043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.263 ms 00:29:50.056 [2024-12-10 14:35:14.860079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.870303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:50.056 [2024-12-10 14:35:14.872594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.872630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:50.056 [2024-12-10 14:35:14.872646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.487 ms 00:29:50.056 [2024-12-10 14:35:14.872664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.872759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.872774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:50.056 [2024-12-10 14:35:14.872788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:50.056 [2024-12-10 14:35:14.872800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.872875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.872890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:50.056 [2024-12-10 14:35:14.872902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:50.056 [2024-12-10 14:35:14.872913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.872942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.872956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:50.056 [2024-12-10 14:35:14.872969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:50.056 [2024-12-10 14:35:14.872980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.056 [2024-12-10 14:35:14.873020] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:50.056 [2024-12-10 14:35:14.873033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.056 [2024-12-10 14:35:14.873045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:50.056 [2024-12-10 14:35:14.873057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:50.056 [2024-12-10 14:35:14.873073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.315 [2024-12-10 14:35:14.907689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.315 [2024-12-10 14:35:14.907863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:50.315 [2024-12-10 14:35:14.908010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.651 ms 00:29:50.315 [2024-12-10 14:35:14.908053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.315 [2024-12-10 14:35:14.908146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.315 [2024-12-10 
14:35:14.908245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:50.315 [2024-12-10 14:35:14.908285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:50.315 [2024-12-10 14:35:14.908321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.315 [2024-12-10 14:35:14.909484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.331 ms, result 0 00:29:51.252  [2024-12-10T14:35:17.023Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-10T14:35:17.961Z] Copying: 47/1024 [MB] (23 MBps) [2024-12-10T14:35:19.341Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-10T14:35:20.278Z] Copying: 94/1024 [MB] (23 MBps) [2024-12-10T14:35:21.216Z] Copying: 117/1024 [MB] (23 MBps) [2024-12-10T14:35:22.154Z] Copying: 140/1024 [MB] (23 MBps) [2024-12-10T14:35:23.092Z] Copying: 164/1024 [MB] (23 MBps) [2024-12-10T14:35:24.053Z] Copying: 187/1024 [MB] (23 MBps) [2024-12-10T14:35:25.030Z] Copying: 210/1024 [MB] (23 MBps) [2024-12-10T14:35:25.976Z] Copying: 233/1024 [MB] (23 MBps) [2024-12-10T14:35:26.914Z] Copying: 257/1024 [MB] (23 MBps) [2024-12-10T14:35:28.292Z] Copying: 280/1024 [MB] (23 MBps) [2024-12-10T14:35:29.230Z] Copying: 304/1024 [MB] (23 MBps) [2024-12-10T14:35:30.165Z] Copying: 327/1024 [MB] (23 MBps) [2024-12-10T14:35:31.103Z] Copying: 351/1024 [MB] (23 MBps) [2024-12-10T14:35:32.041Z] Copying: 375/1024 [MB] (23 MBps) [2024-12-10T14:35:32.979Z] Copying: 398/1024 [MB] (23 MBps) [2024-12-10T14:35:33.918Z] Copying: 422/1024 [MB] (23 MBps) [2024-12-10T14:35:35.309Z] Copying: 445/1024 [MB] (23 MBps) [2024-12-10T14:35:36.246Z] Copying: 468/1024 [MB] (22 MBps) [2024-12-10T14:35:37.183Z] Copying: 490/1024 [MB] (22 MBps) [2024-12-10T14:35:38.121Z] Copying: 512/1024 [MB] (22 MBps) [2024-12-10T14:35:39.058Z] Copying: 535/1024 [MB] (22 MBps) [2024-12-10T14:35:39.996Z] Copying: 557/1024 [MB] (22 MBps) [2024-12-10T14:35:40.934Z] Copying: 580/1024 [MB] (22 MBps) [2024-12-10T14:35:42.314Z] Copying: 603/1024 [MB] (22 MBps) [2024-12-10T14:35:42.882Z] Copying: 626/1024 [MB] (22 MBps) [2024-12-10T14:35:44.262Z] Copying: 649/1024 [MB] (23 MBps) [2024-12-10T14:35:45.208Z] Copying: 672/1024 [MB] (22 MBps) [2024-12-10T14:35:46.147Z] Copying: 695/1024 [MB] (22 MBps) [2024-12-10T14:35:47.084Z] Copying: 718/1024 [MB] (23 MBps) [2024-12-10T14:35:48.022Z] Copying: 741/1024 [MB] (22 MBps) [2024-12-10T14:35:48.960Z] Copying: 764/1024 [MB] (22 MBps) [2024-12-10T14:35:49.897Z] Copying: 787/1024 [MB] (23 MBps) [2024-12-10T14:35:50.888Z] Copying: 810/1024 [MB] (22 MBps) [2024-12-10T14:35:52.268Z] Copying: 833/1024 [MB] (23 MBps) [2024-12-10T14:35:53.206Z] Copying: 857/1024 [MB] (23 MBps) [2024-12-10T14:35:54.145Z] Copying: 879/1024 [MB] (22 MBps) [2024-12-10T14:35:55.082Z] Copying: 903/1024 [MB] (23 MBps) [2024-12-10T14:35:56.018Z] Copying: 926/1024 [MB] (23 MBps) [2024-12-10T14:35:56.954Z] Copying: 950/1024 [MB] (23 MBps) [2024-12-10T14:35:57.891Z] Copying: 974/1024 [MB] (23 MBps) [2024-12-10T14:35:59.271Z] Copying: 997/1024 [MB] (23 MBps) [2024-12-10T14:35:59.839Z] Copying: 1021/1024 [MB] (23 MBps) [2024-12-10T14:35:59.839Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-10 14:35:59.687452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.005 [2024-12-10 14:35:59.687526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:35.005 [2024-12-10 14:35:59.687546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 
00:30:35.005 [2024-12-10 14:35:59.687558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.005 [2024-12-10 14:35:59.689931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:35.005 [2024-12-10 14:35:59.697228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.005 [2024-12-10 14:35:59.697267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:35.005 [2024-12-10 14:35:59.697285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.119 ms 00:30:35.005 [2024-12-10 14:35:59.697305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.005 [2024-12-10 14:35:59.706468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.006 [2024-12-10 14:35:59.706508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:35.006 [2024-12-10 14:35:59.706522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.372 ms 00:30:35.006 [2024-12-10 14:35:59.706549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.006 [2024-12-10 14:35:59.730161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.006 [2024-12-10 14:35:59.730204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:35.006 [2024-12-10 14:35:59.730219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.632 ms 00:30:35.006 [2024-12-10 14:35:59.730231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.006 [2024-12-10 14:35:59.735208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.006 [2024-12-10 14:35:59.735261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:35.006 [2024-12-10 14:35:59.735274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.941 ms 00:30:35.006 [2024-12-10 14:35:59.735284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.006 [2024-12-10 14:35:59.771121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.006 [2024-12-10 14:35:59.771159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:35.006 [2024-12-10 14:35:59.771173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.814 ms 00:30:35.006 [2024-12-10 14:35:59.771184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.006 [2024-12-10 14:35:59.791714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.006 [2024-12-10 14:35:59.791751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:35.006 [2024-12-10 14:35:59.791765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.524 ms 00:30:35.006 [2024-12-10 14:35:59.791792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.266 [2024-12-10 14:35:59.906093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.266 [2024-12-10 14:35:59.906262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:35.266 [2024-12-10 14:35:59.906293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.442 ms 00:30:35.266 [2024-12-10 14:35:59.906305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.266 [2024-12-10 14:35:59.941495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.266 [2024-12-10 
14:35:59.941665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:35.266 [2024-12-10 14:35:59.941695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.223 ms 00:30:35.266 [2024-12-10 14:35:59.941722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.266 [2024-12-10 14:35:59.976224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.266 [2024-12-10 14:35:59.976260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:35.266 [2024-12-10 14:35:59.976273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.521 ms 00:30:35.266 [2024-12-10 14:35:59.976284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.266 [2024-12-10 14:36:00.010265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.266 [2024-12-10 14:36:00.010413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:35.266 [2024-12-10 14:36:00.010435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.998 ms 00:30:35.266 [2024-12-10 14:36:00.010446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.266 [2024-12-10 14:36:00.046991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.266 [2024-12-10 14:36:00.047040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:35.266 [2024-12-10 14:36:00.047054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.460 ms 00:30:35.266 [2024-12-10 14:36:00.047065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.266 [2024-12-10 14:36:00.047103] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:35.266 [2024-12-10 14:36:00.047120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106752 / 261120 wr_cnt: 1 state: open 00:30:35.266 [2024-12-10 14:36:00.047134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047257] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:35.266 [2024-12-10 14:36:00.047383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 
[2024-12-10 14:36:00.047825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.047999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:30:35.267 [2024-12-10 14:36:00.048121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:35.267 [2024-12-10 14:36:00.048550] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:35.267 [2024-12-10 14:36:00.048561] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b799b7cb-f6a0-41cf-a325-8a1affd9b1f3 00:30:35.267 [2024-12-10 14:36:00.048600] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106752 00:30:35.267 [2024-12-10 14:36:00.048612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107712 00:30:35.267 [2024-12-10 14:36:00.048622] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106752 00:30:35.267 [2024-12-10 14:36:00.048633] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:30:35.267 [2024-12-10 14:36:00.048644] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:35.267 [2024-12-10 14:36:00.048654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:35.267 [2024-12-10 14:36:00.048665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:35.267 [2024-12-10 14:36:00.048684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:35.267 [2024-12-10 14:36:00.048694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:35.267 [2024-12-10 14:36:00.048705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.267 [2024-12-10 14:36:00.048717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:35.267 [2024-12-10 14:36:00.048727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:30:35.267 [2024-12-10 14:36:00.048744] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.267 [2024-12-10 14:36:00.069091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.267 [2024-12-10 14:36:00.069124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:35.267 [2024-12-10 14:36:00.069138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.329 ms 00:30:35.267 [2024-12-10 14:36:00.069166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.267 [2024-12-10 14:36:00.069803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.267 [2024-12-10 14:36:00.069820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:35.268 [2024-12-10 14:36:00.069839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:30:35.268 [2024-12-10 14:36:00.069849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.527 [2024-12-10 14:36:00.124432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.527 [2024-12-10 14:36:00.124467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:35.527 [2024-12-10 14:36:00.124480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.527 [2024-12-10 14:36:00.124508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.527 [2024-12-10 14:36:00.124571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.527 [2024-12-10 14:36:00.124584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:35.527 [2024-12-10 14:36:00.124600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.527 [2024-12-10 14:36:00.124611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.527 [2024-12-10 14:36:00.124680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.527 [2024-12-10 14:36:00.124710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:35.527 [2024-12-10 14:36:00.124721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.527 [2024-12-10 14:36:00.124732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.527 [2024-12-10 14:36:00.124751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.527 [2024-12-10 14:36:00.124762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:35.527 [2024-12-10 14:36:00.124773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.527 [2024-12-10 14:36:00.124784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.527 [2024-12-10 14:36:00.258947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.527 [2024-12-10 14:36:00.259157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:35.527 [2024-12-10 14:36:00.259183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.527 [2024-12-10 14:36:00.259197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.786 [2024-12-10 14:36:00.364448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.786 [2024-12-10 14:36:00.364506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:35.786 [2024-12-10 14:36:00.364524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:35.786 [2024-12-10 14:36:00.364543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.786 [2024-12-10 14:36:00.364655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.786 [2024-12-10 14:36:00.364684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:35.786 [2024-12-10 14:36:00.364697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.786 [2024-12-10 14:36:00.364708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.786 [2024-12-10 14:36:00.364781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.786 [2024-12-10 14:36:00.364795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:35.786 [2024-12-10 14:36:00.364806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.786 [2024-12-10 14:36:00.364817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.786 [2024-12-10 14:36:00.364961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.786 [2024-12-10 14:36:00.364976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:35.786 [2024-12-10 14:36:00.364989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.786 [2024-12-10 14:36:00.365000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.786 [2024-12-10 14:36:00.365039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.786 [2024-12-10 14:36:00.365053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:35.786 [2024-12-10 14:36:00.365064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.786 [2024-12-10 14:36:00.365074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.786 [2024-12-10 14:36:00.365127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.786 [2024-12-10 14:36:00.365140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:35.786 [2024-12-10 14:36:00.365151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.786 [2024-12-10 14:36:00.365162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.786 [2024-12-10 14:36:00.365213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.786 [2024-12-10 14:36:00.365226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:35.786 [2024-12-10 14:36:00.365237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.786 [2024-12-10 14:36:00.365248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.786 [2024-12-10 14:36:00.365399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 679.900 ms, result 0 00:30:37.693 00:30:37.693 00:30:37.693 14:36:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:39.070 14:36:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:39.070 [2024-12-10 14:36:03.836399] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 
initialization... 00:30:39.070 [2024-12-10 14:36:03.836532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84134 ] 00:30:39.329 [2024-12-10 14:36:04.020890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.329 [2024-12-10 14:36:04.154301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.897 [2024-12-10 14:36:04.563863] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:39.897 [2024-12-10 14:36:04.564113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:40.158 [2024-12-10 14:36:04.730531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.730593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:40.158 [2024-12-10 14:36:04.730611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:40.158 [2024-12-10 14:36:04.730622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.730692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.730709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:40.158 [2024-12-10 14:36:04.730722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:40.158 [2024-12-10 14:36:04.730732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.730756] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:40.158 [2024-12-10 14:36:04.731832] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:40.158 [2024-12-10 14:36:04.731868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.731880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:40.158 [2024-12-10 14:36:04.731892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:30:40.158 [2024-12-10 14:36:04.731902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.734276] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:40.158 [2024-12-10 14:36:04.754405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.754445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:40.158 [2024-12-10 14:36:04.754460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.162 ms 00:30:40.158 [2024-12-10 14:36:04.754471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.754545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.754558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:40.158 [2024-12-10 14:36:04.754570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:40.158 [2024-12-10 14:36:04.754580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.766793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.766962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:40.158 [2024-12-10 14:36:04.767001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.158 ms 00:30:40.158 [2024-12-10 14:36:04.767019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.767116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.767129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:40.158 [2024-12-10 14:36:04.767141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:40.158 [2024-12-10 14:36:04.767152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.767214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.767227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:40.158 [2024-12-10 14:36:04.767239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:40.158 [2024-12-10 14:36:04.767250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.767282] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:40.158 [2024-12-10 14:36:04.772990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.773021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:40.158 [2024-12-10 14:36:04.773038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.725 ms 00:30:40.158 [2024-12-10 14:36:04.773048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.773082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.773093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:40.158 [2024-12-10 14:36:04.773104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:40.158 [2024-12-10 14:36:04.773115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.773151] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:40.158 [2024-12-10 14:36:04.773179] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:40.158 [2024-12-10 14:36:04.773225] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:40.158 [2024-12-10 14:36:04.773247] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:40.158 [2024-12-10 14:36:04.773333] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:40.158 [2024-12-10 14:36:04.773346] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:40.158 [2024-12-10 14:36:04.773359] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:40.158 [2024-12-10 14:36:04.773372] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:40.158 [2024-12-10 14:36:04.773384] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:40.158 [2024-12-10 14:36:04.773396] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:40.158 [2024-12-10 14:36:04.773407] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:40.158 [2024-12-10 14:36:04.773420] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:40.158 [2024-12-10 14:36:04.773430] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:40.158 [2024-12-10 14:36:04.773441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.773451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:40.158 [2024-12-10 14:36:04.773461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:30:40.158 [2024-12-10 14:36:04.773471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.773565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.158 [2024-12-10 14:36:04.773577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:40.158 [2024-12-10 14:36:04.773588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:40.158 [2024-12-10 14:36:04.773598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.158 [2024-12-10 14:36:04.773712] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:40.158 [2024-12-10 14:36:04.773745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:40.158 [2024-12-10 14:36:04.773758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:40.158 [2024-12-10 14:36:04.773768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.158 [2024-12-10 14:36:04.773780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:40.158 [2024-12-10 14:36:04.773790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:40.158 [2024-12-10 14:36:04.773800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:40.158 [2024-12-10 14:36:04.773826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:40.158 [2024-12-10 14:36:04.773837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:40.158 [2024-12-10 14:36:04.773846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:40.158 [2024-12-10 14:36:04.773858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:40.158 [2024-12-10 14:36:04.773867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:40.158 [2024-12-10 14:36:04.773877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:40.158 [2024-12-10 14:36:04.773898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:40.158 [2024-12-10 14:36:04.773909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:40.158 [2024-12-10 14:36:04.773919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.158 [2024-12-10 14:36:04.773929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:40.158 [2024-12-10 14:36:04.773938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:40.158 [2024-12-10 14:36:04.773949] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.159 [2024-12-10 14:36:04.773958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:40.159 [2024-12-10 14:36:04.773967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:40.159 [2024-12-10 14:36:04.773977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:40.159 [2024-12-10 14:36:04.773986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:40.159 [2024-12-10 14:36:04.773995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:40.159 [2024-12-10 14:36:04.774005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:40.159 [2024-12-10 14:36:04.774014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:40.159 [2024-12-10 14:36:04.774023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:40.159 [2024-12-10 14:36:04.774032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:40.159 [2024-12-10 14:36:04.774041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:40.159 [2024-12-10 14:36:04.774051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:40.159 [2024-12-10 14:36:04.774060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:40.159 [2024-12-10 14:36:04.774069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:40.159 [2024-12-10 14:36:04.774078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:40.159 [2024-12-10 14:36:04.774087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:40.159 [2024-12-10 14:36:04.774096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:40.159 [2024-12-10 14:36:04.774105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:40.159 [2024-12-10 14:36:04.774114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:40.159 [2024-12-10 14:36:04.774123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:40.159 [2024-12-10 14:36:04.774132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:40.159 [2024-12-10 14:36:04.774141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.159 [2024-12-10 14:36:04.774151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:40.159 [2024-12-10 14:36:04.774159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:40.159 [2024-12-10 14:36:04.774177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.159 [2024-12-10 14:36:04.774186] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:40.159 [2024-12-10 14:36:04.774198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:40.159 [2024-12-10 14:36:04.774208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:40.159 [2024-12-10 14:36:04.774219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:40.159 [2024-12-10 14:36:04.774229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:40.159 [2024-12-10 14:36:04.774240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:40.159 [2024-12-10 14:36:04.774249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:40.159 
[2024-12-10 14:36:04.774259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:40.159 [2024-12-10 14:36:04.774268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:40.159 [2024-12-10 14:36:04.774278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:40.159 [2024-12-10 14:36:04.774289] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:40.159 [2024-12-10 14:36:04.774302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:40.159 [2024-12-10 14:36:04.774320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:40.159 [2024-12-10 14:36:04.774331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:40.159 [2024-12-10 14:36:04.774342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:40.159 [2024-12-10 14:36:04.774353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:40.159 [2024-12-10 14:36:04.774364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:40.159 [2024-12-10 14:36:04.774375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:40.159 [2024-12-10 14:36:04.774386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:40.159 [2024-12-10 14:36:04.774396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:40.159 [2024-12-10 14:36:04.774407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:40.159 [2024-12-10 14:36:04.774418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:40.159 [2024-12-10 14:36:04.774428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:40.159 [2024-12-10 14:36:04.774439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:40.159 [2024-12-10 14:36:04.774450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:40.159 [2024-12-10 14:36:04.774461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:40.159 [2024-12-10 14:36:04.774471] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:40.159 [2024-12-10 14:36:04.774483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:40.159 [2024-12-10 14:36:04.774495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:40.159 [2024-12-10 14:36:04.774505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:40.159 [2024-12-10 14:36:04.774515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:40.159 [2024-12-10 14:36:04.774528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:40.159 [2024-12-10 14:36:04.774539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.774550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:40.159 [2024-12-10 14:36:04.774561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:30:40.159 [2024-12-10 14:36:04.774571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.159 [2024-12-10 14:36:04.823494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.823530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:40.159 [2024-12-10 14:36:04.823545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.950 ms 00:30:40.159 [2024-12-10 14:36:04.823578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.159 [2024-12-10 14:36:04.823657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.823668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:40.159 [2024-12-10 14:36:04.823679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:40.159 [2024-12-10 14:36:04.823703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.159 [2024-12-10 14:36:04.904992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.905031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:40.159 [2024-12-10 14:36:04.905046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.334 ms 00:30:40.159 [2024-12-10 14:36:04.905058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.159 [2024-12-10 14:36:04.905104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.905121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:40.159 [2024-12-10 14:36:04.905133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:40.159 [2024-12-10 14:36:04.905144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.159 [2024-12-10 14:36:04.906140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.906282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:40.159 [2024-12-10 14:36:04.906356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:30:40.159 [2024-12-10 14:36:04.906392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.159 [2024-12-10 14:36:04.906559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.906598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:40.159 [2024-12-10 14:36:04.906689] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:30:40.159 [2024-12-10 14:36:04.906728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.159 [2024-12-10 14:36:04.929498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.929678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:40.159 [2024-12-10 14:36:04.929801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.756 ms 00:30:40.159 [2024-12-10 14:36:04.929842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.159 [2024-12-10 14:36:04.949910] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:40.159 [2024-12-10 14:36:04.950053] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:40.159 [2024-12-10 14:36:04.950075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.950088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:40.159 [2024-12-10 14:36:04.950100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.105 ms 00:30:40.159 [2024-12-10 14:36:04.950111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.159 [2024-12-10 14:36:04.979417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.159 [2024-12-10 14:36:04.979563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:40.159 [2024-12-10 14:36:04.979584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.310 ms 00:30:40.159 [2024-12-10 14:36:04.979612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:04.997860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:04.997897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:40.419 [2024-12-10 14:36:04.997911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.154 ms 00:30:40.419 [2024-12-10 14:36:04.997922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.015580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.015614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:40.419 [2024-12-10 14:36:05.015628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.645 ms 00:30:40.419 [2024-12-10 14:36:05.015638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.016413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.016448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:40.419 [2024-12-10 14:36:05.016467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:30:40.419 [2024-12-10 14:36:05.016492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.108477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.108554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:40.419 [2024-12-10 14:36:05.108579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 92.107 ms 00:30:40.419 [2024-12-10 14:36:05.108590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.119103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:40.419 [2024-12-10 14:36:05.122269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.122299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:40.419 [2024-12-10 14:36:05.122314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.647 ms 00:30:40.419 [2024-12-10 14:36:05.122341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.122428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.122442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:40.419 [2024-12-10 14:36:05.122459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:40.419 [2024-12-10 14:36:05.122469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.124643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.124694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:40.419 [2024-12-10 14:36:05.124709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.132 ms 00:30:40.419 [2024-12-10 14:36:05.124720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.124752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.124765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:40.419 [2024-12-10 14:36:05.124776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:40.419 [2024-12-10 14:36:05.124787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.124839] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:40.419 [2024-12-10 14:36:05.124854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.124866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:40.419 [2024-12-10 14:36:05.124878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:40.419 [2024-12-10 14:36:05.124889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.160426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.160464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:40.419 [2024-12-10 14:36:05.160485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.573 ms 00:30:40.419 [2024-12-10 14:36:05.160496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.419 [2024-12-10 14:36:05.160578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.419 [2024-12-10 14:36:05.160590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:40.419 [2024-12-10 14:36:05.160601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:40.419 [2024-12-10 14:36:05.160611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
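Each management step above is reported by the same trace_step helper in mngt/ftl_mngt.c: an Action (or Rollback) header, the step name, its duration, and a status, with status 0 meaning the step succeeded. The per-step durations profile where startup time goes; in this restore, "Restore P2L checkpoints" (92.107 ms) and "Initialize NV cache" (81.334 ms) dominate, and the step durations sum to roughly 424 ms against the 431.823 ms total in the finish message just below, the remainder being time spent between steps. A minimal sketch for pulling that profile out of a saved log (the ftl_startup.log filename is an assumption, not part of this run):

    # sum the per-step durations printed by trace_step and compare the result
    # with the "Management process finished ... duration = N ms" line
    grep -o 'duration: [0-9.]* ms' ftl_startup.log | awk '{sum += $2} END {printf "%.3f ms\n", sum}'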
00:30:40.419 [2024-12-10 14:36:05.162211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 431.823 ms, result 0 00:30:41.797  [2024-12-10T14:36:07.568Z] Copying: 1236/1048576 [kB] (1236 kBps) [2024-12-10T14:36:08.503Z] Copying: 9068/1048576 [kB] (7832 kBps) [2024-12-10T14:36:09.440Z] Copying: 40/1024 [MB] (31 MBps) [2024-12-10T14:36:10.377Z] Copying: 72/1024 [MB] (31 MBps) [2024-12-10T14:36:11.756Z] Copying: 104/1024 [MB] (31 MBps) [2024-12-10T14:36:12.693Z] Copying: 137/1024 [MB] (32 MBps) [2024-12-10T14:36:13.629Z] Copying: 169/1024 [MB] (32 MBps) [2024-12-10T14:36:14.567Z] Copying: 202/1024 [MB] (32 MBps) [2024-12-10T14:36:15.504Z] Copying: 235/1024 [MB] (32 MBps) [2024-12-10T14:36:16.488Z] Copying: 268/1024 [MB] (33 MBps) [2024-12-10T14:36:17.425Z] Copying: 301/1024 [MB] (33 MBps) [2024-12-10T14:36:18.362Z] Copying: 334/1024 [MB] (32 MBps) [2024-12-10T14:36:19.741Z] Copying: 367/1024 [MB] (32 MBps) [2024-12-10T14:36:20.681Z] Copying: 400/1024 [MB] (32 MBps) [2024-12-10T14:36:21.617Z] Copying: 432/1024 [MB] (32 MBps) [2024-12-10T14:36:22.555Z] Copying: 464/1024 [MB] (32 MBps) [2024-12-10T14:36:23.492Z] Copying: 496/1024 [MB] (32 MBps) [2024-12-10T14:36:24.429Z] Copying: 528/1024 [MB] (32 MBps) [2024-12-10T14:36:25.366Z] Copying: 560/1024 [MB] (31 MBps) [2024-12-10T14:36:26.743Z] Copying: 592/1024 [MB] (31 MBps) [2024-12-10T14:36:27.680Z] Copying: 624/1024 [MB] (32 MBps) [2024-12-10T14:36:28.616Z] Copying: 656/1024 [MB] (31 MBps) [2024-12-10T14:36:29.551Z] Copying: 688/1024 [MB] (31 MBps) [2024-12-10T14:36:30.487Z] Copying: 720/1024 [MB] (31 MBps) [2024-12-10T14:36:31.424Z] Copying: 752/1024 [MB] (32 MBps) [2024-12-10T14:36:32.362Z] Copying: 784/1024 [MB] (31 MBps) [2024-12-10T14:36:33.740Z] Copying: 817/1024 [MB] (32 MBps) [2024-12-10T14:36:34.681Z] Copying: 849/1024 [MB] (32 MBps) [2024-12-10T14:36:35.623Z] Copying: 881/1024 [MB] (32 MBps) [2024-12-10T14:36:36.559Z] Copying: 913/1024 [MB] (32 MBps) [2024-12-10T14:36:37.496Z] Copying: 945/1024 [MB] (31 MBps) [2024-12-10T14:36:38.433Z] Copying: 977/1024 [MB] (32 MBps) [2024-12-10T14:36:39.002Z] Copying: 1009/1024 [MB] (32 MBps) [2024-12-10T14:36:39.002Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-12-10 14:36:38.811810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.811885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:14.168 [2024-12-10 14:36:38.811907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:14.168 [2024-12-10 14:36:38.811920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.168 [2024-12-10 14:36:38.811949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:14.168 [2024-12-10 14:36:38.817807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.818003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:14.168 [2024-12-10 14:36:38.818039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.845 ms 00:31:14.168 [2024-12-10 14:36:38.818056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.168 [2024-12-10 14:36:38.818350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.818376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:14.168 [2024-12-10 
14:36:38.818393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:31:14.168 [2024-12-10 14:36:38.818407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.168 [2024-12-10 14:36:38.831516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.831560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:14.168 [2024-12-10 14:36:38.831576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.105 ms 00:31:14.168 [2024-12-10 14:36:38.831605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.168 [2024-12-10 14:36:38.836650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.836694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:14.168 [2024-12-10 14:36:38.836716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.016 ms 00:31:14.168 [2024-12-10 14:36:38.836726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.168 [2024-12-10 14:36:38.873659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.873700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:14.168 [2024-12-10 14:36:38.873714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.905 ms 00:31:14.168 [2024-12-10 14:36:38.873742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.168 [2024-12-10 14:36:38.894280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.894420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:14.168 [2024-12-10 14:36:38.894443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.520 ms 00:31:14.168 [2024-12-10 14:36:38.894471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.168 [2024-12-10 14:36:38.896686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.896722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:14.168 [2024-12-10 14:36:38.896736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:31:14.168 [2024-12-10 14:36:38.896755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.168 [2024-12-10 14:36:38.931545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.931580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:14.168 [2024-12-10 14:36:38.931593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.828 ms 00:31:14.168 [2024-12-10 14:36:38.931603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.168 [2024-12-10 14:36:38.966692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.168 [2024-12-10 14:36:38.966731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:14.168 [2024-12-10 14:36:38.966744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.107 ms 00:31:14.168 [2024-12-10 14:36:38.966754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.428 [2024-12-10 14:36:39.001698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.429 [2024-12-10 14:36:39.001858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:31:14.429 [2024-12-10 14:36:39.001879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.961 ms 00:31:14.429 [2024-12-10 14:36:39.001889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.429 [2024-12-10 14:36:39.035996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.429 [2024-12-10 14:36:39.036030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:14.429 [2024-12-10 14:36:39.036042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.038 ms 00:31:14.429 [2024-12-10 14:36:39.036052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.429 [2024-12-10 14:36:39.036089] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:14.429 [2024-12-10 14:36:39.036106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:14.429 [2024-12-10 14:36:39.036120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:14.429 [2024-12-10 14:36:39.036132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 
00:31:14.429 [2024-12-10 14:36:39.036303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 
wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.036990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.037001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.037012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.037022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.037032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:14.429 [2024-12-10 14:36:39.037043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037159] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:14.430 [2024-12-10 14:36:39.037261] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:14.430 [2024-12-10 14:36:39.037271] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b799b7cb-f6a0-41cf-a325-8a1affd9b1f3 00:31:14.430 [2024-12-10 14:36:39.037283] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:14.430 [2024-12-10 14:36:39.037293] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157888 00:31:14.430 [2024-12-10 14:36:39.037308] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155904 00:31:14.430 [2024-12-10 14:36:39.037319] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:31:14.430 [2024-12-10 14:36:39.037329] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:14.430 [2024-12-10 14:36:39.037351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:14.430 [2024-12-10 14:36:39.037362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:14.430 [2024-12-10 14:36:39.037371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:14.430 [2024-12-10 14:36:39.037381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:14.430 [2024-12-10 14:36:39.037391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.430 [2024-12-10 14:36:39.037402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:14.430 [2024-12-10 14:36:39.037413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.306 ms 00:31:14.430 [2024-12-10 14:36:39.037424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.430 [2024-12-10 14:36:39.057544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.430 [2024-12-10 14:36:39.057577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:14.430 [2024-12-10 14:36:39.057590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.118 ms 00:31:14.430 [2024-12-10 14:36:39.057601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.430 [2024-12-10 14:36:39.058244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.430 [2024-12-10 14:36:39.058265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:14.430 [2024-12-10 14:36:39.058278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:31:14.430 [2024-12-10 14:36:39.058289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:31:14.430 [2024-12-10 14:36:39.109643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.430 [2024-12-10 14:36:39.109697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:14.430 [2024-12-10 14:36:39.109712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.430 [2024-12-10 14:36:39.109723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.430 [2024-12-10 14:36:39.109786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.430 [2024-12-10 14:36:39.109798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:14.430 [2024-12-10 14:36:39.109809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.430 [2024-12-10 14:36:39.109819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.430 [2024-12-10 14:36:39.109930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.430 [2024-12-10 14:36:39.109945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:14.430 [2024-12-10 14:36:39.109956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.430 [2024-12-10 14:36:39.109967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.430 [2024-12-10 14:36:39.109986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.430 [2024-12-10 14:36:39.109997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:14.430 [2024-12-10 14:36:39.110008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.430 [2024-12-10 14:36:39.110018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.430 [2024-12-10 14:36:39.236192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.430 [2024-12-10 14:36:39.236248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:14.430 [2024-12-10 14:36:39.236264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.430 [2024-12-10 14:36:39.236275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.689 [2024-12-10 14:36:39.337009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.689 [2024-12-10 14:36:39.337236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:14.689 [2024-12-10 14:36:39.337261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.689 [2024-12-10 14:36:39.337273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.689 [2024-12-10 14:36:39.337394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.689 [2024-12-10 14:36:39.337413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:14.689 [2024-12-10 14:36:39.337424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.689 [2024-12-10 14:36:39.337435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.689 [2024-12-10 14:36:39.337487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.689 [2024-12-10 14:36:39.337509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:14.689 [2024-12-10 14:36:39.337521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.689 [2024-12-10 
14:36:39.337532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.690 [2024-12-10 14:36:39.337695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.690 [2024-12-10 14:36:39.337741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:14.690 [2024-12-10 14:36:39.337759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.690 [2024-12-10 14:36:39.337770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.690 [2024-12-10 14:36:39.337821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.690 [2024-12-10 14:36:39.337835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:14.690 [2024-12-10 14:36:39.337846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.690 [2024-12-10 14:36:39.337857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.690 [2024-12-10 14:36:39.337904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.690 [2024-12-10 14:36:39.337917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:14.690 [2024-12-10 14:36:39.337933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.690 [2024-12-10 14:36:39.337943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.690 [2024-12-10 14:36:39.337993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.690 [2024-12-10 14:36:39.338007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:14.690 [2024-12-10 14:36:39.338018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.690 [2024-12-10 14:36:39.338029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.690 [2024-12-10 14:36:39.338182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.222 ms, result 0 00:31:16.069 00:31:16.069 00:31:16.069 14:36:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:17.498 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:17.498 14:36:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:17.498 [2024-12-10 14:36:42.278710] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
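(For orientation: the "Dump statistics" block above reports WAF 1.0127, which is just total writes / user writes = 157888 / 155904 ≈ 1.0127, i.e. almost no write amplification before the dirty shutdown.) The two script lines that follow are the heart of this verification pass: md5sum -c re-checks the data window written before the simulated crash, and spdk_dd then re-reads the second window from the recovered FTL bdev so it can be checksummed as well. A minimal sketch of that step, using only the flags visible in the log (SPDK_DIR is a stand-in for /home/vagrant/spdk_repo/spdk; test/ftl/dirty_shutdown.sh remains the authoritative source):

    # Verify the window that was fully written before the crash.
    md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"

    # Re-read the second 262144-block window straight from the FTL bdev:
    # --ib names the input bdev, --skip offsets into it, and --json points
    # at the bdev configuration the test saved earlier.
    "$SPDK_DIR/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK_DIR/test/ftl/testfile2" \
        --count=262144 --skip=262144 \
        --json="$SPDK_DIR/test/ftl/config/ftl.json"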
00:31:17.498 [2024-12-10 14:36:42.279040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84515 ] 00:31:17.757 [2024-12-10 14:36:42.462410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.016 [2024-12-10 14:36:42.602252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.275 [2024-12-10 14:36:43.010874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.275 [2024-12-10 14:36:43.010952] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.536 [2024-12-10 14:36:43.176528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.176582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:18.536 [2024-12-10 14:36:43.176599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:18.536 [2024-12-10 14:36:43.176610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.176658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.176694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:18.536 [2024-12-10 14:36:43.176705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:18.536 [2024-12-10 14:36:43.176715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.176738] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:18.536 [2024-12-10 14:36:43.177783] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:18.536 [2024-12-10 14:36:43.177813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.177826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:18.536 [2024-12-10 14:36:43.177838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:31:18.536 [2024-12-10 14:36:43.177849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.180167] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:18.536 [2024-12-10 14:36:43.199290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.199325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:18.536 [2024-12-10 14:36:43.199340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.155 ms 00:31:18.536 [2024-12-10 14:36:43.199350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.199417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.199430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:18.536 [2024-12-10 14:36:43.199441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:18.536 [2024-12-10 14:36:43.199451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.211430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:18.536 [2024-12-10 14:36:43.211456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:18.536 [2024-12-10 14:36:43.211470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.928 ms 00:31:18.536 [2024-12-10 14:36:43.211485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.211570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.211584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:18.536 [2024-12-10 14:36:43.211595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:18.536 [2024-12-10 14:36:43.211605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.211659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.211689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:18.536 [2024-12-10 14:36:43.211700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:18.536 [2024-12-10 14:36:43.211710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.211741] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:18.536 [2024-12-10 14:36:43.217244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.217274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:18.536 [2024-12-10 14:36:43.217290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.518 ms 00:31:18.536 [2024-12-10 14:36:43.217299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.217332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.217344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:18.536 [2024-12-10 14:36:43.217355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:18.536 [2024-12-10 14:36:43.217365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.217400] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:18.536 [2024-12-10 14:36:43.217430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:18.536 [2024-12-10 14:36:43.217465] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:18.536 [2024-12-10 14:36:43.217486] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:18.536 [2024-12-10 14:36:43.217583] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:18.536 [2024-12-10 14:36:43.217596] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:18.536 [2024-12-10 14:36:43.217610] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:18.536 [2024-12-10 14:36:43.217622] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:18.536 [2024-12-10 14:36:43.217634] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:18.536 [2024-12-10 14:36:43.217645] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:18.536 [2024-12-10 14:36:43.217655] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:18.536 [2024-12-10 14:36:43.217687] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:18.536 [2024-12-10 14:36:43.217697] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:18.536 [2024-12-10 14:36:43.217708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.217719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:18.536 [2024-12-10 14:36:43.217730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:31:18.536 [2024-12-10 14:36:43.217740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.217819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.536 [2024-12-10 14:36:43.217831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:18.536 [2024-12-10 14:36:43.217842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:18.536 [2024-12-10 14:36:43.217867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.536 [2024-12-10 14:36:43.217967] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:18.536 [2024-12-10 14:36:43.217983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:18.536 [2024-12-10 14:36:43.217994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.536 [2024-12-10 14:36:43.218005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:18.537 [2024-12-10 14:36:43.218026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:18.537 [2024-12-10 14:36:43.218046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:18.537 [2024-12-10 14:36:43.218055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.537 [2024-12-10 14:36:43.218076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:18.537 [2024-12-10 14:36:43.218086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:18.537 [2024-12-10 14:36:43.218095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.537 [2024-12-10 14:36:43.218114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:18.537 [2024-12-10 14:36:43.218123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:18.537 [2024-12-10 14:36:43.218133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:18.537 [2024-12-10 14:36:43.218152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:18.537 [2024-12-10 14:36:43.218161] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:18.537 [2024-12-10 14:36:43.218180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.537 [2024-12-10 14:36:43.218216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:18.537 [2024-12-10 14:36:43.218225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.537 [2024-12-10 14:36:43.218243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:18.537 [2024-12-10 14:36:43.218252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.537 [2024-12-10 14:36:43.218270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:18.537 [2024-12-10 14:36:43.218279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.537 [2024-12-10 14:36:43.218297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:18.537 [2024-12-10 14:36:43.218306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.537 [2024-12-10 14:36:43.218324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:18.537 [2024-12-10 14:36:43.218333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:18.537 [2024-12-10 14:36:43.218342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.537 [2024-12-10 14:36:43.218351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:18.537 [2024-12-10 14:36:43.218360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:18.537 [2024-12-10 14:36:43.218369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:18.537 [2024-12-10 14:36:43.218389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:18.537 [2024-12-10 14:36:43.218399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218408] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:18.537 [2024-12-10 14:36:43.218418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:18.537 [2024-12-10 14:36:43.218429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.537 [2024-12-10 14:36:43.218438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.537 [2024-12-10 14:36:43.218449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:18.537 [2024-12-10 14:36:43.218458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:18.537 [2024-12-10 14:36:43.218467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:18.537 
[2024-12-10 14:36:43.218477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:18.537 [2024-12-10 14:36:43.218486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:18.537 [2024-12-10 14:36:43.218496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:18.537 [2024-12-10 14:36:43.218506] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:18.537 [2024-12-10 14:36:43.218519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.537 [2024-12-10 14:36:43.218536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:18.537 [2024-12-10 14:36:43.218547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:18.537 [2024-12-10 14:36:43.218558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:18.537 [2024-12-10 14:36:43.218569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:18.537 [2024-12-10 14:36:43.218579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:18.537 [2024-12-10 14:36:43.218590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:18.537 [2024-12-10 14:36:43.218601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:18.537 [2024-12-10 14:36:43.218611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:18.537 [2024-12-10 14:36:43.218624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:18.537 [2024-12-10 14:36:43.218634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:18.537 [2024-12-10 14:36:43.218645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:18.537 [2024-12-10 14:36:43.218655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:18.537 [2024-12-10 14:36:43.218666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:18.537 [2024-12-10 14:36:43.218676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:18.537 [2024-12-10 14:36:43.218686] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:18.537 [2024-12-10 14:36:43.218698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.537 [2024-12-10 14:36:43.218722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:18.537 [2024-12-10 14:36:43.218733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:18.537 [2024-12-10 14:36:43.218748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:18.537 [2024-12-10 14:36:43.218759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:18.537 [2024-12-10 14:36:43.218770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.537 [2024-12-10 14:36:43.218782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:18.537 [2024-12-10 14:36:43.218793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:31:18.537 [2024-12-10 14:36:43.218803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.537 [2024-12-10 14:36:43.265193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.537 [2024-12-10 14:36:43.265371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:18.537 [2024-12-10 14:36:43.265392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.410 ms 00:31:18.537 [2024-12-10 14:36:43.265413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.537 [2024-12-10 14:36:43.265493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.537 [2024-12-10 14:36:43.265514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:18.537 [2024-12-10 14:36:43.265525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:18.537 [2024-12-10 14:36:43.265536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.537 [2024-12-10 14:36:43.331236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.537 [2024-12-10 14:36:43.331269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:18.537 [2024-12-10 14:36:43.331283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.726 ms 00:31:18.537 [2024-12-10 14:36:43.331294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.537 [2024-12-10 14:36:43.331329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.537 [2024-12-10 14:36:43.331340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:18.537 [2024-12-10 14:36:43.331355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:18.537 [2024-12-10 14:36:43.331365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.537 [2024-12-10 14:36:43.332173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.537 [2024-12-10 14:36:43.332206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:18.537 [2024-12-10 14:36:43.332219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:31:18.537 [2024-12-10 14:36:43.332229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.538 [2024-12-10 14:36:43.332365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.538 [2024-12-10 14:36:43.332384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:18.538 [2024-12-10 14:36:43.332403] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:31:18.538 [2024-12-10 14:36:43.332414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.538 [2024-12-10 14:36:43.356291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.538 [2024-12-10 14:36:43.356449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:18.538 [2024-12-10 14:36:43.356554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.892 ms 00:31:18.538 [2024-12-10 14:36:43.356593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.797 [2024-12-10 14:36:43.377241] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:18.797 [2024-12-10 14:36:43.377370] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:18.797 [2024-12-10 14:36:43.377406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.797 [2024-12-10 14:36:43.377418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:18.797 [2024-12-10 14:36:43.377430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.708 ms 00:31:18.797 [2024-12-10 14:36:43.377440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.797 [2024-12-10 14:36:43.405932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.797 [2024-12-10 14:36:43.405968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:18.797 [2024-12-10 14:36:43.405982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.402 ms 00:31:18.797 [2024-12-10 14:36:43.405993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.797 [2024-12-10 14:36:43.423161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.797 [2024-12-10 14:36:43.423195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:18.797 [2024-12-10 14:36:43.423208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.136 ms 00:31:18.797 [2024-12-10 14:36:43.423218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.797 [2024-12-10 14:36:43.439567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.797 [2024-12-10 14:36:43.439599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:18.797 [2024-12-10 14:36:43.439612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.332 ms 00:31:18.797 [2024-12-10 14:36:43.439622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.797 [2024-12-10 14:36:43.440361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.797 [2024-12-10 14:36:43.440393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:18.797 [2024-12-10 14:36:43.440410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:31:18.797 [2024-12-10 14:36:43.440421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.797 [2024-12-10 14:36:43.530088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.797 [2024-12-10 14:36:43.530144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:18.798 [2024-12-10 14:36:43.530167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 89.789 ms 00:31:18.798 [2024-12-10 14:36:43.530179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.798 [2024-12-10 14:36:43.541397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:18.798 [2024-12-10 14:36:43.544931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.798 [2024-12-10 14:36:43.544963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:18.798 [2024-12-10 14:36:43.544978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.725 ms 00:31:18.798 [2024-12-10 14:36:43.544989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.798 [2024-12-10 14:36:43.545073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.798 [2024-12-10 14:36:43.545088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:18.798 [2024-12-10 14:36:43.545104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:18.798 [2024-12-10 14:36:43.545115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.798 [2024-12-10 14:36:43.546482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.798 [2024-12-10 14:36:43.546510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:18.798 [2024-12-10 14:36:43.546522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:31:18.798 [2024-12-10 14:36:43.546532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.798 [2024-12-10 14:36:43.546565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.798 [2024-12-10 14:36:43.546578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:18.798 [2024-12-10 14:36:43.546589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:18.798 [2024-12-10 14:36:43.546601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.798 [2024-12-10 14:36:43.546649] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:18.798 [2024-12-10 14:36:43.546663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.798 [2024-12-10 14:36:43.546687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:18.798 [2024-12-10 14:36:43.546699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:18.798 [2024-12-10 14:36:43.546710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.798 [2024-12-10 14:36:43.583656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.798 [2024-12-10 14:36:43.583702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:18.798 [2024-12-10 14:36:43.583724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.983 ms 00:31:18.798 [2024-12-10 14:36:43.583736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.798 [2024-12-10 14:36:43.583811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.798 [2024-12-10 14:36:43.583823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:18.798 [2024-12-10 14:36:43.583835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:18.798 [2024-12-10 14:36:43.583846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
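The startup trace ending here follows a fixed shape: every management step is logged as an Action (or Rollback) record carrying a name, a duration, and a status, so per-step timings can be pulled straight out of the console output. A small, hypothetical helper for doing that (not part of the SPDK tree; assumes GNU grep and the console log saved as build.log):

    # Pair each step name with the duration that follows it, yielding
    # lines such as "name: Restore P2L checkpoints   duration: 89.789 ms".
    grep -oE 'name: [^*]+|duration: [0-9.]+ ms' build.log | paste - -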
00:31:18.798 [2024-12-10 14:36:43.585573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.165 ms, result 0 00:31:20.176 [spdk_dd progress meter, flattened carriage-return output: Copying 24/1024 [MB] at 2024-12-10T14:36:45.957Z through 1024/1024 [MB] at 2024-12-10T14:37:26.363Z, per-interval rate 22-26 MBps, average 24 MBps] [2024-12-10 14:37:26.293306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.529 [2024-12-10 14:37:26.293401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:01.529 [2024-12-10 14:37:26.293432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:01.529 [2024-12-10 14:37:26.293455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.529 [2024-12-10 14:37:26.293502] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:01.529 [2024-12-10 14:37:26.302213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.529 [2024-12-10 14:37:26.302298] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:01.529 [2024-12-10 14:37:26.302325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.675 ms 00:32:01.530 [2024-12-10 14:37:26.302347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.530 [2024-12-10 14:37:26.302742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.530 [2024-12-10 14:37:26.302790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:01.530 [2024-12-10 14:37:26.302813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:32:01.530 [2024-12-10 14:37:26.302836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.530 [2024-12-10 14:37:26.306177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.530 [2024-12-10 14:37:26.306214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:01.530 [2024-12-10 14:37:26.306230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.312 ms 00:32:01.530 [2024-12-10 14:37:26.306253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.530 [2024-12-10 14:37:26.312328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.530 [2024-12-10 14:37:26.312383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:01.530 [2024-12-10 14:37:26.312401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.056 ms 00:32:01.530 [2024-12-10 14:37:26.312416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.530 [2024-12-10 14:37:26.347868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.530 [2024-12-10 14:37:26.347921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:01.530 [2024-12-10 14:37:26.347938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.418 ms 00:32:01.530 [2024-12-10 14:37:26.347949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.790 [2024-12-10 14:37:26.384119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.790 [2024-12-10 14:37:26.384180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:01.790 [2024-12-10 14:37:26.384200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.178 ms 00:32:01.790 [2024-12-10 14:37:26.384212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.790 [2024-12-10 14:37:26.386461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.790 [2024-12-10 14:37:26.386509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:01.790 [2024-12-10 14:37:26.386525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:32:01.790 [2024-12-10 14:37:26.386536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.790 [2024-12-10 14:37:26.422108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.790 [2024-12-10 14:37:26.422146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:01.790 [2024-12-10 14:37:26.422160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.609 ms 00:32:01.790 [2024-12-10 14:37:26.422171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.790 [2024-12-10 14:37:26.456557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.790 
[2024-12-10 14:37:26.456593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:01.790 [2024-12-10 14:37:26.456606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.384 ms 00:32:01.790 [2024-12-10 14:37:26.456616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.790 [2024-12-10 14:37:26.490825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.790 [2024-12-10 14:37:26.490859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:01.790 [2024-12-10 14:37:26.490888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.209 ms 00:32:01.790 [2024-12-10 14:37:26.490898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.790 [2024-12-10 14:37:26.525176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.790 [2024-12-10 14:37:26.525211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:01.790 [2024-12-10 14:37:26.525224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.253 ms 00:32:01.790 [2024-12-10 14:37:26.525234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.790 [2024-12-10 14:37:26.525288] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:01.790 [2024-12-10 14:37:26.525312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:01.790 [2024-12-10 14:37:26.525329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:01.790 [2024-12-10 14:37:26.525341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525481] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 
14:37:26.525789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:01.790 [2024-12-10 14:37:26.525853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.525999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:32:01.791 [2024-12-10 14:37:26.526054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:01.791 [2024-12-10 14:37:26.526435] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:01.791 [2024-12-10 14:37:26.526446] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b799b7cb-f6a0-41cf-a325-8a1affd9b1f3 00:32:01.791 [2024-12-10 14:37:26.526457] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:01.791 [2024-12-10 14:37:26.526467] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:01.791 [2024-12-10 14:37:26.526477] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:01.791 [2024-12-10 14:37:26.526488] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:01.791 [2024-12-10 14:37:26.526511] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:01.791 [2024-12-10 14:37:26.526523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:01.791 [2024-12-10 14:37:26.526533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:01.791 [2024-12-10 14:37:26.526543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:01.791 [2024-12-10 14:37:26.526551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:01.791 [2024-12-10 14:37:26.526562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.791 [2024-12-10 14:37:26.526572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:01.791 [2024-12-10 14:37:26.526583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:32:01.791 [2024-12-10 14:37:26.526598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.791 [2024-12-10 14:37:26.546891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.791 [2024-12-10 14:37:26.546923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:01.791 [2024-12-10 14:37:26.546952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.289 ms 00:32:01.791 [2024-12-10 14:37:26.546963] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.791 [2024-12-10 14:37:26.547599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.791 [2024-12-10 14:37:26.547628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:01.791 [2024-12-10 14:37:26.547640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:32:01.791 [2024-12-10 14:37:26.547650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.791 [2024-12-10 14:37:26.599840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.791 [2024-12-10 14:37:26.599877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:01.791 [2024-12-10 14:37:26.599906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.791 [2024-12-10 14:37:26.599918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.791 [2024-12-10 14:37:26.599979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.791 [2024-12-10 14:37:26.599996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:01.791 [2024-12-10 14:37:26.600007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.791 [2024-12-10 14:37:26.600018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.791 [2024-12-10 14:37:26.600082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.791 [2024-12-10 14:37:26.600096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:01.791 [2024-12-10 14:37:26.600107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.791 [2024-12-10 14:37:26.600118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.791 [2024-12-10 14:37:26.600136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.791 [2024-12-10 14:37:26.600146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:01.791 [2024-12-10 14:37:26.600162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.791 [2024-12-10 14:37:26.600173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.051 [2024-12-10 14:37:26.726122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.051 [2024-12-10 14:37:26.726203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:02.051 [2024-12-10 14:37:26.726219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.051 [2024-12-10 14:37:26.726231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.051 [2024-12-10 14:37:26.826624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.051 [2024-12-10 14:37:26.826689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:02.051 [2024-12-10 14:37:26.826705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.051 [2024-12-10 14:37:26.826716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.051 [2024-12-10 14:37:26.826818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.051 [2024-12-10 14:37:26.826832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:02.051 [2024-12-10 14:37:26.826844] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.051 [2024-12-10 14:37:26.826854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.051 [2024-12-10 14:37:26.826910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.051 [2024-12-10 14:37:26.826923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:02.051 [2024-12-10 14:37:26.826935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.051 [2024-12-10 14:37:26.826948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.051 [2024-12-10 14:37:26.827098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.051 [2024-12-10 14:37:26.827113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:02.051 [2024-12-10 14:37:26.827125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.051 [2024-12-10 14:37:26.827136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.051 [2024-12-10 14:37:26.827178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.051 [2024-12-10 14:37:26.827192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:02.051 [2024-12-10 14:37:26.827204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.051 [2024-12-10 14:37:26.827215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.051 [2024-12-10 14:37:26.827264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.051 [2024-12-10 14:37:26.827275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:02.051 [2024-12-10 14:37:26.827286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.051 [2024-12-10 14:37:26.827295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.051 [2024-12-10 14:37:26.827341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.051 [2024-12-10 14:37:26.827354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:02.051 [2024-12-10 14:37:26.827366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.051 [2024-12-10 14:37:26.827381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.051 [2024-12-10 14:37:26.827518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.054 ms, result 0 00:32:03.431 00:32:03.431 00:32:03.431 14:37:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:05.337 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 
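The "testfile2: OK" line above is the pass/fail gate of this test: a checksum recorded before the unclean shutdown must still verify after the FTL device recovers. A minimal sketch of that round-trip, with illustrative file names rather than the test's exact arguments:

    # before the dirty shutdown: record a checksum of the data just written
    md5sum testfile2 > testfile2.md5
    # ... unclean shutdown here, then the FTL bdev is recreated and recovers ...
    # after recovery: re-read and compare; a mismatch makes md5sum exit non-zero
    md5sum -c testfile2.md5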
00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:05.337 Process with pid 82627 is not found 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82627 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82627 ']' 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 82627 00:32:05.337 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82627) - No such process 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 82627 is not found' 00:32:05.337 14:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:05.597 Remove shared memory files 00:32:05.597 14:37:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:05.597 14:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:05.597 14:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:05.597 14:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:05.597 14:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:05.597 14:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:05.597 14:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:05.597 00:32:05.597 real 3m50.119s 00:32:05.597 user 4m20.807s 00:32:05.597 sys 0m43.718s 00:32:05.597 14:37:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.597 14:37:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:05.597 ************************************ 00:32:05.597 END TEST ftl_dirty_shutdown 00:32:05.597 ************************************ 00:32:05.597 14:37:30 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:05.597 14:37:30 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.597 14:37:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.597 14:37:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:05.597 ************************************ 00:32:05.597 START TEST ftl_upgrade_shutdown 00:32:05.597 ************************************ 00:32:05.597 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:05.857 * Looking for test storage... 
00:32:05.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.857 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:05.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.857 --rc genhtml_branch_coverage=1 00:32:05.857 --rc genhtml_function_coverage=1 00:32:05.857 --rc genhtml_legend=1 00:32:05.857 --rc geninfo_all_blocks=1 00:32:05.857 --rc geninfo_unexecuted_blocks=1 00:32:05.857 00:32:05.857 ' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:05.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.858 --rc genhtml_branch_coverage=1 00:32:05.858 --rc genhtml_function_coverage=1 00:32:05.858 --rc genhtml_legend=1 00:32:05.858 --rc geninfo_all_blocks=1 00:32:05.858 --rc geninfo_unexecuted_blocks=1 00:32:05.858 00:32:05.858 ' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:05.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.858 --rc genhtml_branch_coverage=1 00:32:05.858 --rc genhtml_function_coverage=1 00:32:05.858 --rc genhtml_legend=1 00:32:05.858 --rc geninfo_all_blocks=1 00:32:05.858 --rc geninfo_unexecuted_blocks=1 00:32:05.858 00:32:05.858 ' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:05.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.858 --rc genhtml_branch_coverage=1 00:32:05.858 --rc genhtml_function_coverage=1 00:32:05.858 --rc genhtml_legend=1 00:32:05.858 --rc geninfo_all_blocks=1 00:32:05.858 --rc geninfo_unexecuted_blocks=1 00:32:05.858 00:32:05.858 ' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:05.858 14:37:30 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85068 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85068 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85068 ']' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:05.858 14:37:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:06.118 [2024-12-10 14:37:30.787141] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
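The tcp_target_setup step being traced here reduces to launching spdk_tgt pinned to core 0 and blocking until its JSON-RPC socket answers. A rough launch-and-wait sketch, simplified from what waitforlisten does (the real helper also bounds the retries and checks that the pid is still alive; rootdir is the repo root set earlier in common.sh):

    # start the target on core 0; it serves JSON-RPC on /var/tmp/spdk.sock by default
    "$rootdir"/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!
    # poll the RPC socket until the app answers
    until "$rootdir"/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done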
00:32:06.118 [2024-12-10 14:37:30.787260] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85068 ] 00:32:06.377 [2024-12-10 14:37:30.969818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.377 [2024-12-10 14:37:31.104089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:07.315 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:07.612 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:07.612 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:07.612 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:07.612 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:07.612 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:07.612 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:07.612 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:07.612 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:07.872 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:07.872 { 00:32:07.872 "name": "basen1", 00:32:07.872 "aliases": [ 00:32:07.872 "f3ad37f2-af61-41b6-b0cb-0f48c42d1fd8" 00:32:07.872 ], 00:32:07.872 "product_name": "NVMe disk", 00:32:07.872 "block_size": 4096, 00:32:07.872 "num_blocks": 1310720, 00:32:07.872 "uuid": "f3ad37f2-af61-41b6-b0cb-0f48c42d1fd8", 00:32:07.872 "numa_id": -1, 00:32:07.872 "assigned_rate_limits": { 00:32:07.872 "rw_ios_per_sec": 0, 00:32:07.872 "rw_mbytes_per_sec": 0, 00:32:07.872 "r_mbytes_per_sec": 0, 00:32:07.872 "w_mbytes_per_sec": 0 00:32:07.872 }, 00:32:07.872 "claimed": true, 00:32:07.872 "claim_type": "read_many_write_one", 00:32:07.872 "zoned": false, 00:32:07.872 "supported_io_types": { 00:32:07.872 "read": true, 00:32:07.872 "write": true, 00:32:07.872 "unmap": true, 00:32:07.872 "flush": true, 00:32:07.872 "reset": true, 00:32:07.872 "nvme_admin": true, 00:32:07.872 "nvme_io": true, 00:32:07.872 "nvme_io_md": false, 00:32:07.872 "write_zeroes": true, 00:32:07.872 "zcopy": false, 00:32:07.872 "get_zone_info": false, 00:32:07.872 "zone_management": false, 00:32:07.872 "zone_append": false, 00:32:07.872 "compare": true, 00:32:07.872 "compare_and_write": false, 00:32:07.872 "abort": true, 00:32:07.872 "seek_hole": false, 00:32:07.872 "seek_data": false, 00:32:07.872 "copy": true, 00:32:07.872 "nvme_iov_md": false 00:32:07.872 }, 00:32:07.872 "driver_specific": { 00:32:07.872 "nvme": [ 00:32:07.872 { 00:32:07.872 "pci_address": "0000:00:11.0", 00:32:07.872 "trid": { 00:32:07.872 "trtype": "PCIe", 00:32:07.872 "traddr": "0000:00:11.0" 00:32:07.872 }, 00:32:07.872 "ctrlr_data": { 00:32:07.872 "cntlid": 0, 00:32:07.872 "vendor_id": "0x1b36", 00:32:07.872 "model_number": "QEMU NVMe Ctrl", 00:32:07.872 "serial_number": "12341", 00:32:07.872 "firmware_revision": "8.0.0", 00:32:07.872 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:07.872 "oacs": { 00:32:07.872 "security": 0, 00:32:07.872 "format": 1, 00:32:07.872 "firmware": 0, 00:32:07.872 "ns_manage": 1 00:32:07.872 }, 00:32:07.872 "multi_ctrlr": false, 00:32:07.872 "ana_reporting": false 00:32:07.872 }, 00:32:07.872 "vs": { 00:32:07.872 "nvme_version": "1.4" 00:32:07.872 }, 00:32:07.872 "ns_data": { 00:32:07.872 "id": 1, 00:32:07.872 "can_share": false 00:32:07.872 } 00:32:07.872 } 00:32:07.872 ], 00:32:07.872 "mp_policy": "active_passive" 00:32:07.872 } 00:32:07.872 } 00:32:07.872 ]' 00:32:07.872 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:07.872 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:07.872 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:07.873 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:07.873 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:07.873 14:37:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:07.873 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:07.873 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:07.873 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:07.873 14:37:32 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:07.873 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:08.131 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=c40a3b09-1001-4e5f-a3c4-ee1241607ff3 00:32:08.131 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:08.131 14:37:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c40a3b09-1001-4e5f-a3c4-ee1241607ff3 00:32:08.390 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:08.648 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=6af93b64-27a6-4fbb-8e5d-25ee6d4ed6d4 00:32:08.648 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6af93b64-27a6-4fbb-8e5d-25ee6d4ed6d4 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=68204dc6-33a4-41ab-9572-472094c52082 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 68204dc6-33a4-41ab-9572-472094c52082 ]] 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 68204dc6-33a4-41ab-9572-472094c52082 5120 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=68204dc6-33a4-41ab-9572-472094c52082 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 68204dc6-33a4-41ab-9572-472094c52082 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=68204dc6-33a4-41ab-9572-472094c52082 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:08.907 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 68204dc6-33a4-41ab-9572-472094c52082 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:09.167 { 00:32:09.167 "name": "68204dc6-33a4-41ab-9572-472094c52082", 00:32:09.167 "aliases": [ 00:32:09.167 "lvs/basen1p0" 00:32:09.167 ], 00:32:09.167 "product_name": "Logical Volume", 00:32:09.167 "block_size": 4096, 00:32:09.167 "num_blocks": 5242880, 00:32:09.167 "uuid": "68204dc6-33a4-41ab-9572-472094c52082", 00:32:09.167 "assigned_rate_limits": { 00:32:09.167 "rw_ios_per_sec": 0, 00:32:09.167 "rw_mbytes_per_sec": 0, 00:32:09.167 "r_mbytes_per_sec": 0, 00:32:09.167 "w_mbytes_per_sec": 0 00:32:09.167 }, 00:32:09.167 "claimed": false, 00:32:09.167 "zoned": false, 00:32:09.167 "supported_io_types": { 00:32:09.167 "read": true, 00:32:09.167 "write": true, 00:32:09.167 "unmap": true, 00:32:09.167 "flush": false, 00:32:09.167 "reset": true, 00:32:09.167 "nvme_admin": false, 00:32:09.167 "nvme_io": false, 00:32:09.167 "nvme_io_md": false, 00:32:09.167 "write_zeroes": 
true, 00:32:09.167 "zcopy": false, 00:32:09.167 "get_zone_info": false, 00:32:09.167 "zone_management": false, 00:32:09.167 "zone_append": false, 00:32:09.167 "compare": false, 00:32:09.167 "compare_and_write": false, 00:32:09.167 "abort": false, 00:32:09.167 "seek_hole": true, 00:32:09.167 "seek_data": true, 00:32:09.167 "copy": false, 00:32:09.167 "nvme_iov_md": false 00:32:09.167 }, 00:32:09.167 "driver_specific": { 00:32:09.167 "lvol": { 00:32:09.167 "lvol_store_uuid": "6af93b64-27a6-4fbb-8e5d-25ee6d4ed6d4", 00:32:09.167 "base_bdev": "basen1", 00:32:09.167 "thin_provision": true, 00:32:09.167 "num_allocated_clusters": 0, 00:32:09.167 "snapshot": false, 00:32:09.167 "clone": false, 00:32:09.167 "esnap_clone": false 00:32:09.167 } 00:32:09.167 } 00:32:09.167 } 00:32:09.167 ]' 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:09.167 14:37:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:09.426 14:37:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:09.426 14:37:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:09.426 14:37:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:09.686 14:37:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:09.686 14:37:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:09.686 14:37:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 68204dc6-33a4-41ab-9572-472094c52082 -c cachen1p0 --l2p_dram_limit 2 00:32:09.686 [2024-12-10 14:37:34.495756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.686 [2024-12-10 14:37:34.495815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:09.686 [2024-12-10 14:37:34.495838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:09.686 [2024-12-10 14:37:34.495850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.686 [2024-12-10 14:37:34.495933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.686 [2024-12-10 14:37:34.495947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:09.686 [2024-12-10 14:37:34.495961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:32:09.686 [2024-12-10 14:37:34.495973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.686 [2024-12-10 14:37:34.496002] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:09.686 [2024-12-10 
14:37:34.497102] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:09.686 [2024-12-10 14:37:34.497151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.686 [2024-12-10 14:37:34.497163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:09.686 [2024-12-10 14:37:34.497180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.156 ms 00:32:09.686 [2024-12-10 14:37:34.497191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.686 [2024-12-10 14:37:34.497280] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 12f70387-fa62-43ad-bf58-6b3870524c55 00:32:09.686 [2024-12-10 14:37:34.499728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.686 [2024-12-10 14:37:34.499770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:09.686 [2024-12-10 14:37:34.499784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:09.686 [2024-12-10 14:37:34.499799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.686 [2024-12-10 14:37:34.514266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.686 [2024-12-10 14:37:34.514314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:09.686 [2024-12-10 14:37:34.514330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.426 ms 00:32:09.686 [2024-12-10 14:37:34.514344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.686 [2024-12-10 14:37:34.514403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.686 [2024-12-10 14:37:34.514420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:09.686 [2024-12-10 14:37:34.514432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:32:09.686 [2024-12-10 14:37:34.514450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.686 [2024-12-10 14:37:34.514514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.686 [2024-12-10 14:37:34.514531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:09.686 [2024-12-10 14:37:34.514546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:09.686 [2024-12-10 14:37:34.514560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.686 [2024-12-10 14:37:34.514587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:09.946 [2024-12-10 14:37:34.520871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.946 [2024-12-10 14:37:34.520923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:09.946 [2024-12-10 14:37:34.520943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.298 ms 00:32:09.946 [2024-12-10 14:37:34.520954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.946 [2024-12-10 14:37:34.520991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.946 [2024-12-10 14:37:34.521003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:09.946 [2024-12-10 14:37:34.521017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:09.946 [2024-12-10 14:37:34.521028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:09.946 [2024-12-10 14:37:34.521068] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:09.946 [2024-12-10 14:37:34.521214] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:09.946 [2024-12-10 14:37:34.521237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:09.946 [2024-12-10 14:37:34.521252] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:09.946 [2024-12-10 14:37:34.521269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:09.946 [2024-12-10 14:37:34.521282] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:09.946 [2024-12-10 14:37:34.521297] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:09.946 [2024-12-10 14:37:34.521307] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:09.946 [2024-12-10 14:37:34.521326] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:09.946 [2024-12-10 14:37:34.521336] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:09.946 [2024-12-10 14:37:34.521350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.946 [2024-12-10 14:37:34.521360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:09.946 [2024-12-10 14:37:34.521374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:32:09.946 [2024-12-10 14:37:34.521384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.946 [2024-12-10 14:37:34.521464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.946 [2024-12-10 14:37:34.521487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:09.946 [2024-12-10 14:37:34.521501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:32:09.946 [2024-12-10 14:37:34.521520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.946 [2024-12-10 14:37:34.521623] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:09.946 [2024-12-10 14:37:34.521642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:09.946 [2024-12-10 14:37:34.521656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:09.946 [2024-12-10 14:37:34.521667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.946 [2024-12-10 14:37:34.521692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:09.946 [2024-12-10 14:37:34.521702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:09.946 [2024-12-10 14:37:34.521715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:09.946 [2024-12-10 14:37:34.521724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:09.946 [2024-12-10 14:37:34.521737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:09.946 [2024-12-10 14:37:34.521746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.946 [2024-12-10 14:37:34.521760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:09.946 [2024-12-10 14:37:34.521770] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:09.946 [2024-12-10 14:37:34.521783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.946 [2024-12-10 14:37:34.521793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:09.946 [2024-12-10 14:37:34.521805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:09.946 [2024-12-10 14:37:34.521814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.946 [2024-12-10 14:37:34.521829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:09.946 [2024-12-10 14:37:34.521839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:09.946 [2024-12-10 14:37:34.521852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.946 [2024-12-10 14:37:34.521861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:09.946 [2024-12-10 14:37:34.521873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:09.946 [2024-12-10 14:37:34.521883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:09.946 [2024-12-10 14:37:34.521895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:09.946 [2024-12-10 14:37:34.521905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:09.946 [2024-12-10 14:37:34.521917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:09.946 [2024-12-10 14:37:34.521926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:09.947 [2024-12-10 14:37:34.521938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:09.947 [2024-12-10 14:37:34.521947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:09.947 [2024-12-10 14:37:34.521959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:09.947 [2024-12-10 14:37:34.521968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:09.947 [2024-12-10 14:37:34.521980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:09.947 [2024-12-10 14:37:34.521989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:09.947 [2024-12-10 14:37:34.522003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:09.947 [2024-12-10 14:37:34.522013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.947 [2024-12-10 14:37:34.522025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:09.947 [2024-12-10 14:37:34.522034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:09.947 [2024-12-10 14:37:34.522047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.947 [2024-12-10 14:37:34.522056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:09.947 [2024-12-10 14:37:34.522068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:09.947 [2024-12-10 14:37:34.522077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.947 [2024-12-10 14:37:34.522090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:09.947 [2024-12-10 14:37:34.522099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:09.947 [2024-12-10 14:37:34.522111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.947 [2024-12-10 14:37:34.522120] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:09.947 [2024-12-10 14:37:34.522137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:09.947 [2024-12-10 14:37:34.522147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:09.947 [2024-12-10 14:37:34.522160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:09.947 [2024-12-10 14:37:34.522171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:09.947 [2024-12-10 14:37:34.522187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:09.947 [2024-12-10 14:37:34.522196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:09.947 [2024-12-10 14:37:34.522209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:09.947 [2024-12-10 14:37:34.522219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:09.947 [2024-12-10 14:37:34.522232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:09.947 [2024-12-10 14:37:34.522243] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:09.947 [2024-12-10 14:37:34.522263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:09.947 [2024-12-10 14:37:34.522289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:09.947 [2024-12-10 14:37:34.522324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:09.947 [2024-12-10 14:37:34.522338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:09.947 [2024-12-10 14:37:34.522349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:09.947 [2024-12-10 14:37:34.522364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:09.947 [2024-12-10 14:37:34.522452] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:09.947 [2024-12-10 14:37:34.522467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:09.947 [2024-12-10 14:37:34.522491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:09.947 [2024-12-10 14:37:34.522501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:09.947 [2024-12-10 14:37:34.522515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:09.947 [2024-12-10 14:37:34.522526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.947 [2024-12-10 14:37:34.522541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:09.947 [2024-12-10 14:37:34.522552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.964 ms 00:32:09.947 [2024-12-10 14:37:34.522565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.947 [2024-12-10 14:37:34.522609] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
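For orientation, the FTL startup being traced here was kicked off by the bdev stack assembled just above; condensed from the rpc.py calls visible in this trace (rpc.py stands for the full scripts/rpc.py path, and the generated UUIDs are elided to placeholders):

    rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
    rpc.py bdev_lvol_create_lvstore basen1 lvs                            # lvstore on the base namespace
    rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>               # 20 GiB thin-provisioned base bdev
    rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
    rpc.py bdev_split_create cachen1 -s 5120 1                            # one 5 GiB NV cache partition
    rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2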
00:32:09.947 [2024-12-10 14:37:34.522630] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:13.237 [2024-12-10 14:37:38.060252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.237 [2024-12-10 14:37:38.060355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:13.237 [2024-12-10 14:37:38.060377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3543.380 ms 00:32:13.237 [2024-12-10 14:37:38.060392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.496 [2024-12-10 14:37:38.107183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.496 [2024-12-10 14:37:38.107248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:13.496 [2024-12-10 14:37:38.107269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.487 ms 00:32:13.496 [2024-12-10 14:37:38.107284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.496 [2024-12-10 14:37:38.107382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.496 [2024-12-10 14:37:38.107400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:13.496 [2024-12-10 14:37:38.107413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:13.496 [2024-12-10 14:37:38.107435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.496 [2024-12-10 14:37:38.158771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.496 [2024-12-10 14:37:38.158848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:13.496 [2024-12-10 14:37:38.158865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.378 ms 00:32:13.497 [2024-12-10 14:37:38.158882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.158951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.158970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:13.497 [2024-12-10 14:37:38.158982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:13.497 [2024-12-10 14:37:38.158996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.159845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.159867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:13.497 [2024-12-10 14:37:38.159890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.767 ms 00:32:13.497 [2024-12-10 14:37:38.159905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.159950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.159965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:13.497 [2024-12-10 14:37:38.159979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:32:13.497 [2024-12-10 14:37:38.159996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.184500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.184547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:13.497 [2024-12-10 14:37:38.184562] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.521 ms 00:32:13.497 [2024-12-10 14:37:38.184575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.215233] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:13.497 [2024-12-10 14:37:38.217036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.217069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:13.497 [2024-12-10 14:37:38.217090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.417 ms 00:32:13.497 [2024-12-10 14:37:38.217104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.249410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.249451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:13.497 [2024-12-10 14:37:38.249470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.310 ms 00:32:13.497 [2024-12-10 14:37:38.249481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.249614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.249632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:13.497 [2024-12-10 14:37:38.249652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.082 ms 00:32:13.497 [2024-12-10 14:37:38.249663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.284273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.284418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:13.497 [2024-12-10 14:37:38.284463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.592 ms 00:32:13.497 [2024-12-10 14:37:38.284475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.320368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.320402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:13.497 [2024-12-10 14:37:38.320420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.841 ms 00:32:13.497 [2024-12-10 14:37:38.320446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.497 [2024-12-10 14:37:38.321214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.497 [2024-12-10 14:37:38.321244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:13.497 [2024-12-10 14:37:38.321261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.725 ms 00:32:13.497 [2024-12-10 14:37:38.321276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.756 [2024-12-10 14:37:38.420508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.756 [2024-12-10 14:37:38.420559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:13.756 [2024-12-10 14:37:38.420585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.333 ms 00:32:13.756 [2024-12-10 14:37:38.420598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.756 [2024-12-10 14:37:38.458056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:13.756 [2024-12-10 14:37:38.458101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:13.756 [2024-12-10 14:37:38.458121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.400 ms 00:32:13.756 [2024-12-10 14:37:38.458133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.757 [2024-12-10 14:37:38.493824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.757 [2024-12-10 14:37:38.493865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:13.757 [2024-12-10 14:37:38.493884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.699 ms 00:32:13.757 [2024-12-10 14:37:38.493894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.757 [2024-12-10 14:37:38.530130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.757 [2024-12-10 14:37:38.530170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:13.757 [2024-12-10 14:37:38.530188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.247 ms 00:32:13.757 [2024-12-10 14:37:38.530199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.757 [2024-12-10 14:37:38.530254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.757 [2024-12-10 14:37:38.530267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:13.757 [2024-12-10 14:37:38.530286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:13.757 [2024-12-10 14:37:38.530296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.757 [2024-12-10 14:37:38.530432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:13.757 [2024-12-10 14:37:38.530453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:13.757 [2024-12-10 14:37:38.530468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:32:13.757 [2024-12-10 14:37:38.530479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:13.757 [2024-12-10 14:37:38.531958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4042.264 ms, result 0 00:32:13.757 { 00:32:13.757 "name": "ftl", 00:32:13.757 "uuid": "12f70387-fa62-43ad-bf58-6b3870524c55" 00:32:13.757 } 00:32:13.757 14:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:14.016 [2024-12-10 14:37:38.742333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.016 14:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:14.275 14:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:14.534 [2024-12-10 14:37:39.154127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:14.534 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:14.534 [2024-12-10 14:37:39.336963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:14.534 14:37:39 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:15.102 Fill FTL, iteration 1 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85200 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85200 /var/tmp/spdk.tgt.sock 00:32:15.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85200 ']' 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:15.102 14:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.103 14:37:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:15.103 [2024-12-10 14:37:39.802484] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
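The traced variables above fix the test geometry: bs=1048576 and count=1024 make each pass 1 GiB of 1 MiB blocks at queue depth qd=2, repeated iterations=2 times with seek and skip starting at 0. A minimal bash condensation of the fill loop those variables drive (tcp_dd is the harness wrapper traced above; treat this as an illustration of the loop's shape, not the script itself):

    bs=1048576; count=1024        # 1 GiB per pass, in 1 MiB blocks
    qd=2; iterations=2
    seek=0; skip=0; sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        # tcp_dd forwards these flags to spdk_dd against the
        # NVMe/TCP-attached ftln1 bdev; read-back and MD5 follow below.
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count \
               --qd=$qd --seek=$seek
    done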
00:32:15.103 [2024-12-10 14:37:39.802614] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85200 ] 00:32:15.362 [2024-12-10 14:37:39.988459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.362 [2024-12-10 14:37:40.126995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:16.739 ftln1 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85200 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85200 ']' 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85200 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:16.739 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85200 00:32:16.997 killing process with pid 85200 00:32:16.997 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:16.997 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:16.997 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85200' 00:32:16.997 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85200 00:32:16.997 14:37:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85200 00:32:19.534 14:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:19.534 14:37:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:19.534 [2024-12-10 14:37:44.174059] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
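The initiator setup traced above attaches the exported namespace back as a local bdev (bdev_nvme_attach_controller -b ftl yields ftln1), then wraps the bdev-subsystem config in a {"subsystems": [...]} envelope so a later spdk_dd can boot straight from JSON. A condensed sketch; the redirect target is an assumption, since the trace only shows the [[ -f ini.json ]] check and the --json flag handed to spdk_dd afterwards:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.tgt.sock
    # Attach the NVMe/TCP namespace exported earlier; SPDK names it ftln1.
    $rpc -s $sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # Dump only the bdev subsystem and wrap it into a full-config envelope.
    {
        echo '{"subsystems": ['
        $rpc -s $sock save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json   # assumed target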
00:32:19.534 [2024-12-10 14:37:44.174187] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85254 ] 00:32:19.534 [2024-12-10 14:37:44.353236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.793 [2024-12-10 14:37:44.482438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.171  [2024-12-10T14:37:47.383Z] Copying: 266/1024 [MB] (266 MBps) [2024-12-10T14:37:48.319Z] Copying: 532/1024 [MB] (266 MBps) [2024-12-10T14:37:48.885Z] Copying: 796/1024 [MB] (264 MBps) [2024-12-10T14:37:50.262Z] Copying: 1024/1024 [MB] (average 264 MBps) 00:32:25.428 00:32:25.428 Calculate MD5 checksum, iteration 1 00:32:25.428 14:37:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:25.428 14:37:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:25.428 14:37:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:25.428 14:37:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:25.428 14:37:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:25.428 14:37:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:25.428 14:37:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:25.428 14:37:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:25.428 [2024-12-10 14:37:50.194585] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
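The two spdk_dd invocations traced above are mirror images: the fill writes --if=/dev/urandom into the SPDK output bdev (--ob=ftln1) at an offset given by --seek, while the checksum pass reads the input bdev (--ib=ftln1) out to a host file at --skip, each booting a one-core spdk_dd instance from ini.json. Side by side, with flags copied from the trace:

    dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    # Fill: host source -> SPDK bdev; device offset advances via --seek.
    $dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$cfg \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
    # Read-back: SPDK bdev -> host file; device offset advances via --skip.
    $dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$cfg \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0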
00:32:25.428 [2024-12-10 14:37:50.194942] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85318 ] 00:32:25.687 [2024-12-10 14:37:50.377220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.687 [2024-12-10 14:37:50.506122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.591  [2024-12-10T14:37:52.994Z] Copying: 590/1024 [MB] (590 MBps) [2024-12-10T14:37:53.931Z] Copying: 1024/1024 [MB] (average 588 MBps) 00:32:29.097 00:32:29.097 14:37:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:29.097 14:37:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=89ff394af97246f0717b47400e05e03f 00:32:31.002 Fill FTL, iteration 2 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:31.002 14:37:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:31.002 [2024-12-10 14:37:55.558556] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
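The bookkeeping traced above (cut -f1 '-d ' over md5sum, sums[i]=89ff394a..., then (( i++ )) and --seek=1024) records one MD5 per 1 GiB window and slides both offsets one window forward, so window i occupies blocks [i*count, (i+1)*count) of ftln1. Condensed:

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    # Hash the window just read back and remember it for this pass.
    sums[i]=$(md5sum "$file" | cut -f1 -d' ')
    seek=$((seek + count))    # next fill targets the following 1 GiB window
    skip=$((skip + count))    # next read-back follows it
    (( i++ ))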
00:32:31.002 [2024-12-10 14:37:55.558874] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85374 ] 00:32:31.002 [2024-12-10 14:37:55.743369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.261 [2024-12-10 14:37:55.872272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.639  [2024-12-10T14:37:58.411Z] Copying: 263/1024 [MB] (263 MBps) [2024-12-10T14:37:59.843Z] Copying: 527/1024 [MB] (264 MBps) [2024-12-10T14:38:00.427Z] Copying: 792/1024 [MB] (265 MBps) [2024-12-10T14:38:01.805Z] Copying: 1024/1024 [MB] (average 263 MBps) 00:32:36.971 00:32:36.971 14:38:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:36.971 14:38:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:36.971 Calculate MD5 checksum, iteration 2 00:32:36.971 14:38:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:36.971 14:38:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:36.971 14:38:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:36.971 14:38:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:36.971 14:38:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:36.971 14:38:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:36.971 [2024-12-10 14:38:01.650089] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
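Recording an independent checksum per window is what makes a comparison across the shutdown/restart below possible. A hedged sketch of the check those sums[] enable (the re-verification step itself is not part of this excerpt, so the loop is illustrative only; bs, count, qd, sums[] and file carry over from the sketches above):

    for ((i = 0; i < iterations; i++)); do
        # Re-read window i from the device and compare to the stored MD5.
        tcp_dd --ib=ftln1 --of="$file" --bs=$bs --count=$count --qd=$qd \
               --skip=$((i * count))
        [[ $(md5sum "$file" | cut -f1 -d' ') == "${sums[i]}" ]] \
            || echo "window $i changed across shutdown"
    done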
00:32:36.971 [2024-12-10 14:38:01.651107] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85438 ] 00:32:37.230 [2024-12-10 14:38:01.834860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.230 [2024-12-10 14:38:01.968594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.136  [2024-12-10T14:38:04.537Z] Copying: 622/1024 [MB] (622 MBps) [2024-12-10T14:38:05.915Z] Copying: 1024/1024 [MB] (average 626 MBps) 00:32:41.081 00:32:41.081 14:38:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:41.081 14:38:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:42.980 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:42.980 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3ff2a25b39d13d62ef954cc7c31c8a1a 00:32:42.981 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:42.981 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:42.981 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:42.981 [2024-12-10 14:38:07.493489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.981 [2024-12-10 14:38:07.493552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:42.981 [2024-12-10 14:38:07.493569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:42.981 [2024-12-10 14:38:07.493595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.981 [2024-12-10 14:38:07.493620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.981 [2024-12-10 14:38:07.493635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:42.981 [2024-12-10 14:38:07.493645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:42.981 [2024-12-10 14:38:07.493655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.981 [2024-12-10 14:38:07.493675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:42.981 [2024-12-10 14:38:07.493714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:42.981 [2024-12-10 14:38:07.493726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:42.981 [2024-12-10 14:38:07.493736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:42.981 [2024-12-10 14:38:07.493801] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.295 ms, result 0 00:32:42.981 true 00:32:42.981 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:42.981 { 00:32:42.981 "name": "ftl", 00:32:42.981 "properties": [ 00:32:42.981 { 00:32:42.981 "name": "superblock_version", 00:32:42.981 "value": 5, 00:32:42.981 "read-only": true 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "name": "base_device", 00:32:42.981 "bands": [ 00:32:42.981 { 00:32:42.981 "id": 0, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 
00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 1, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 2, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 3, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 4, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 5, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 6, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 7, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 8, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 9, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 10, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 11, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 12, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 13, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 14, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 15, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 16, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 17, 00:32:42.981 "state": "FREE", 00:32:42.981 "validity": 0.0 00:32:42.981 } 00:32:42.981 ], 00:32:42.981 "read-only": true 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "name": "cache_device", 00:32:42.981 "type": "bdev", 00:32:42.981 "chunks": [ 00:32:42.981 { 00:32:42.981 "id": 0, 00:32:42.981 "state": "INACTIVE", 00:32:42.981 "utilization": 0.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 1, 00:32:42.981 "state": "CLOSED", 00:32:42.981 "utilization": 1.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 2, 00:32:42.981 "state": "CLOSED", 00:32:42.981 "utilization": 1.0 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 3, 00:32:42.981 "state": "OPEN", 00:32:42.981 "utilization": 0.001953125 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "id": 4, 00:32:42.981 "state": "OPEN", 00:32:42.981 "utilization": 0.0 00:32:42.981 } 00:32:42.981 ], 00:32:42.981 "read-only": true 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "name": "verbose_mode", 00:32:42.981 "value": true, 00:32:42.981 "unit": "", 00:32:42.981 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:42.981 }, 00:32:42.981 { 00:32:42.981 "name": "prep_upgrade_on_shutdown", 00:32:42.981 "value": false, 00:32:42.981 "unit": "", 00:32:42.981 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:42.981 } 00:32:42.981 ] 00:32:42.981 } 00:32:42.981 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:43.241 [2024-12-10 14:38:07.913155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:43.241 [2024-12-10 14:38:07.913322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:43.241 [2024-12-10 14:38:07.913473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:43.241 [2024-12-10 14:38:07.913489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.241 [2024-12-10 14:38:07.913521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.241 [2024-12-10 14:38:07.913541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:43.241 [2024-12-10 14:38:07.913551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:43.241 [2024-12-10 14:38:07.913577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.241 [2024-12-10 14:38:07.913597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.241 [2024-12-10 14:38:07.913607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:43.241 [2024-12-10 14:38:07.913617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:43.241 [2024-12-10 14:38:07.913626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.241 [2024-12-10 14:38:07.913682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.508 ms, result 0 00:32:43.241 true 00:32:43.241 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:43.241 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:43.241 14:38:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:43.500 14:38:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:43.500 14:38:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:43.500 14:38:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:43.500 [2024-12-10 14:38:08.300852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.500 [2024-12-10 14:38:08.300888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:43.500 [2024-12-10 14:38:08.300900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:43.500 [2024-12-10 14:38:08.300910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.500 [2024-12-10 14:38:08.300930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.500 [2024-12-10 14:38:08.300941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:43.500 [2024-12-10 14:38:08.300950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:43.500 [2024-12-10 14:38:08.300958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.500 [2024-12-10 14:38:08.300976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.500 [2024-12-10 14:38:08.300985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:43.500 [2024-12-10 14:38:08.300994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:43.500 [2024-12-10 14:38:08.301002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:43.500 [2024-12-10 14:38:08.301048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.183 ms, result 0 00:32:43.500 true 00:32:43.500 14:38:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:43.759 { 00:32:43.759 "name": "ftl", 00:32:43.759 "properties": [ 00:32:43.759 { 00:32:43.759 "name": "superblock_version", 00:32:43.759 "value": 5, 00:32:43.759 "read-only": true 00:32:43.759 }, 00:32:43.759 { 00:32:43.759 "name": "base_device", 00:32:43.759 "bands": [ 00:32:43.759 { 00:32:43.759 "id": 0, 00:32:43.759 "state": "FREE", 00:32:43.759 "validity": 0.0 00:32:43.759 }, 00:32:43.759 { 00:32:43.759 "id": 1, 00:32:43.759 "state": "FREE", 00:32:43.759 "validity": 0.0 00:32:43.759 }, 00:32:43.759 { 00:32:43.759 "id": 2, 00:32:43.759 "state": "FREE", 00:32:43.759 "validity": 0.0 00:32:43.759 }, 00:32:43.759 { 00:32:43.759 "id": 3, 00:32:43.759 "state": "FREE", 00:32:43.759 "validity": 0.0 00:32:43.759 }, 00:32:43.759 { 00:32:43.759 "id": 4, 00:32:43.759 "state": "FREE", 00:32:43.759 "validity": 0.0 00:32:43.759 }, 00:32:43.759 { 00:32:43.759 "id": 5, 00:32:43.759 "state": "FREE", 00:32:43.759 "validity": 0.0 00:32:43.759 }, 00:32:43.759 { 00:32:43.759 "id": 6, 00:32:43.759 "state": "FREE", 00:32:43.759 "validity": 0.0 00:32:43.759 }, 00:32:43.759 { 00:32:43.759 "id": 7, 00:32:43.759 "state": "FREE", 00:32:43.759 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 8, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 9, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 10, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 11, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 12, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 13, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 14, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 15, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 16, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 17, 00:32:43.760 "state": "FREE", 00:32:43.760 "validity": 0.0 00:32:43.760 } 00:32:43.760 ], 00:32:43.760 "read-only": true 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "name": "cache_device", 00:32:43.760 "type": "bdev", 00:32:43.760 "chunks": [ 00:32:43.760 { 00:32:43.760 "id": 0, 00:32:43.760 "state": "INACTIVE", 00:32:43.760 "utilization": 0.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 1, 00:32:43.760 "state": "CLOSED", 00:32:43.760 "utilization": 1.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 2, 00:32:43.760 "state": "CLOSED", 00:32:43.760 "utilization": 1.0 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 3, 00:32:43.760 "state": "OPEN", 00:32:43.760 "utilization": 0.001953125 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "id": 4, 00:32:43.760 "state": "OPEN", 00:32:43.760 "utilization": 0.0 00:32:43.760 } 00:32:43.760 ], 00:32:43.760 "read-only": true 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "name": "verbose_mode", 
00:32:43.760 "value": true, 00:32:43.760 "unit": "", 00:32:43.760 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:43.760 }, 00:32:43.760 { 00:32:43.760 "name": "prep_upgrade_on_shutdown", 00:32:43.760 "value": true, 00:32:43.760 "unit": "", 00:32:43.760 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:43.760 } 00:32:43.760 ] 00:32:43.760 } 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85068 ]] 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85068 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85068 ']' 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85068 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85068 00:32:43.760 killing process with pid 85068 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85068' 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85068 00:32:43.760 14:38:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85068 00:32:45.138 [2024-12-10 14:38:09.629594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:45.138 [2024-12-10 14:38:09.649156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:45.138 [2024-12-10 14:38:09.649197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:45.138 [2024-12-10 14:38:09.649213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:45.138 [2024-12-10 14:38:09.649224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:45.138 [2024-12-10 14:38:09.649245] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:45.138 [2024-12-10 14:38:09.653092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:45.138 [2024-12-10 14:38:09.653121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:45.138 [2024-12-10 14:38:09.653132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.838 ms 00:32:45.138 [2024-12-10 14:38:09.653148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.701653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.701708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:53.272 [2024-12-10 14:38:16.701730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7059.921 ms 00:32:53.272 [2024-12-10 14:38:16.701741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.702825] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.702853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:53.272 [2024-12-10 14:38:16.702864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.069 ms 00:32:53.272 [2024-12-10 14:38:16.702874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.703784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.703803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:53.272 [2024-12-10 14:38:16.703818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.881 ms 00:32:53.272 [2024-12-10 14:38:16.703845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.717995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.718032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:53.272 [2024-12-10 14:38:16.718044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.141 ms 00:32:53.272 [2024-12-10 14:38:16.718054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.727244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.727282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:53.272 [2024-12-10 14:38:16.727296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.170 ms 00:32:53.272 [2024-12-10 14:38:16.727306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.727382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.727399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:53.272 [2024-12-10 14:38:16.727411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:32:53.272 [2024-12-10 14:38:16.727421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.741348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.741521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:53.272 [2024-12-10 14:38:16.741550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.933 ms 00:32:53.272 [2024-12-10 14:38:16.741560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.755475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.755509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:53.272 [2024-12-10 14:38:16.755521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.859 ms 00:32:53.272 [2024-12-10 14:38:16.755531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.769008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.769041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:53.272 [2024-12-10 14:38:16.769054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.465 ms 00:32:53.272 [2024-12-10 14:38:16.769063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.783043] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.783186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:53.272 [2024-12-10 14:38:16.783221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.930 ms 00:32:53.272 [2024-12-10 14:38:16.783232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.272 [2024-12-10 14:38:16.783309] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:53.272 [2024-12-10 14:38:16.783339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:53.272 [2024-12-10 14:38:16.783352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:53.272 [2024-12-10 14:38:16.783363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:53.272 [2024-12-10 14:38:16.783375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:53.272 [2024-12-10 14:38:16.783535] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:53.272 [2024-12-10 14:38:16.783546] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 12f70387-fa62-43ad-bf58-6b3870524c55 00:32:53.272 [2024-12-10 14:38:16.783557] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:53.272 [2024-12-10 14:38:16.783566] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:32:53.272 [2024-12-10 14:38:16.783576] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:32:53.272 [2024-12-10 14:38:16.783587] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:32:53.272 [2024-12-10 14:38:16.783601] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:53.272 [2024-12-10 14:38:16.783612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:53.272 [2024-12-10 14:38:16.783626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:53.272 [2024-12-10 14:38:16.783635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:53.272 [2024-12-10 14:38:16.783644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:53.272 [2024-12-10 14:38:16.783655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.272 [2024-12-10 14:38:16.783666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:53.272 [2024-12-10 14:38:16.783692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:32:53.272 [2024-12-10 14:38:16.783718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:16.802271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.273 [2024-12-10 14:38:16.802435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:53.273 [2024-12-10 14:38:16.802462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.551 ms 00:32:53.273 [2024-12-10 14:38:16.802473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:16.803020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.273 [2024-12-10 14:38:16.803034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:53.273 [2024-12-10 14:38:16.803046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:32:53.273 [2024-12-10 14:38:16.803056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:16.864149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:16.864187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:53.273 [2024-12-10 14:38:16.864199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:16.864215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:16.864244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:16.864255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:53.273 [2024-12-10 14:38:16.864264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:16.864274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:16.864353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:16.864367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:53.273 [2024-12-10 14:38:16.864382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:16.864392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:16.864409] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:16.864419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:53.273 [2024-12-10 14:38:16.864429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:16.864439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:16.979334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:16.979383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:53.273 [2024-12-10 14:38:16.979404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:16.979414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:17.073665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:17.073724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:53.273 [2024-12-10 14:38:17.073737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:17.073748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:17.073844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:17.073857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:53.273 [2024-12-10 14:38:17.073868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:17.073882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:17.073925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:17.073937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:53.273 [2024-12-10 14:38:17.073946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:17.073956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:17.074056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:17.074069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:53.273 [2024-12-10 14:38:17.074080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:17.074089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:17.074128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:17.074140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:53.273 [2024-12-10 14:38:17.074165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:17.074175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:17.074214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:17.074225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:53.273 [2024-12-10 14:38:17.074235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:17.074245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 
[2024-12-10 14:38:17.074290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:53.273 [2024-12-10 14:38:17.074302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:53.273 [2024-12-10 14:38:17.074313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:53.273 [2024-12-10 14:38:17.074323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.273 [2024-12-10 14:38:17.074442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7437.312 ms, result 0 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85648 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85648 00:32:56.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85648 ']' 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.613 14:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:56.613 [2024-12-10 14:38:21.210855] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
00:32:56.613 [2024-12-10 14:38:21.210986] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85648 ] 00:32:56.613 [2024-12-10 14:38:21.397826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.872 [2024-12-10 14:38:21.503705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.810 [2024-12-10 14:38:22.417323] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:57.810 [2024-12-10 14:38:22.417397] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:57.810 [2024-12-10 14:38:22.563054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.563233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:57.810 [2024-12-10 14:38:22.563259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:57.810 [2024-12-10 14:38:22.563270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.563339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.563351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:57.810 [2024-12-10 14:38:22.563362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:32:57.810 [2024-12-10 14:38:22.563372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.563402] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:57.810 [2024-12-10 14:38:22.564343] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:57.810 [2024-12-10 14:38:22.564365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.564375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:57.810 [2024-12-10 14:38:22.564386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.976 ms 00:32:57.810 [2024-12-10 14:38:22.564396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.565828] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:57.810 [2024-12-10 14:38:22.584074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.584110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:57.810 [2024-12-10 14:38:22.584129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.277 ms 00:32:57.810 [2024-12-10 14:38:22.584139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.584199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.584210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:57.810 [2024-12-10 14:38:22.584221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:32:57.810 [2024-12-10 14:38:22.584231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.590860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 
14:38:22.590892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:57.810 [2024-12-10 14:38:22.590903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.566 ms 00:32:57.810 [2024-12-10 14:38:22.590913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.590971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.590984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:57.810 [2024-12-10 14:38:22.590995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:32:57.810 [2024-12-10 14:38:22.591004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.591043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.591058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:57.810 [2024-12-10 14:38:22.591067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:57.810 [2024-12-10 14:38:22.591076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.591101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:57.810 [2024-12-10 14:38:22.595606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.595639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:57.810 [2024-12-10 14:38:22.595651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.518 ms 00:32:57.810 [2024-12-10 14:38:22.595679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.595721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.595732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:57.810 [2024-12-10 14:38:22.595743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:57.810 [2024-12-10 14:38:22.595753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.595808] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:57.810 [2024-12-10 14:38:22.595837] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:57.810 [2024-12-10 14:38:22.595872] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:57.810 [2024-12-10 14:38:22.595889] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:57.810 [2024-12-10 14:38:22.595975] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:57.810 [2024-12-10 14:38:22.595989] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:57.810 [2024-12-10 14:38:22.596002] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:57.810 [2024-12-10 14:38:22.596015] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:57.810 [2024-12-10 14:38:22.596027] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:57.810 [2024-12-10 14:38:22.596043] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:57.810 [2024-12-10 14:38:22.596052] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:57.810 [2024-12-10 14:38:22.596062] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:57.810 [2024-12-10 14:38:22.596071] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:57.810 [2024-12-10 14:38:22.596082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.596091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:57.810 [2024-12-10 14:38:22.596101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:32:57.810 [2024-12-10 14:38:22.596111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.596184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.810 [2024-12-10 14:38:22.596194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:57.810 [2024-12-10 14:38:22.596207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:32:57.810 [2024-12-10 14:38:22.596217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.810 [2024-12-10 14:38:22.596313] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:57.810 [2024-12-10 14:38:22.596326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:57.810 [2024-12-10 14:38:22.596336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:57.811 [2024-12-10 14:38:22.596346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:57.811 [2024-12-10 14:38:22.596364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:57.811 [2024-12-10 14:38:22.596382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:57.811 [2024-12-10 14:38:22.596391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:57.811 [2024-12-10 14:38:22.596401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:57.811 [2024-12-10 14:38:22.596418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:57.811 [2024-12-10 14:38:22.596427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:57.811 [2024-12-10 14:38:22.596444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:57.811 [2024-12-10 14:38:22.596453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:57.811 [2024-12-10 14:38:22.596469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:57.811 [2024-12-10 14:38:22.596478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596487] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:57.811 [2024-12-10 14:38:22.596495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:57.811 [2024-12-10 14:38:22.596503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.811 [2024-12-10 14:38:22.596512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:57.811 [2024-12-10 14:38:22.596531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:57.811 [2024-12-10 14:38:22.596540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.811 [2024-12-10 14:38:22.596548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:57.811 [2024-12-10 14:38:22.596557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:57.811 [2024-12-10 14:38:22.596566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.811 [2024-12-10 14:38:22.596574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:57.811 [2024-12-10 14:38:22.596583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:57.811 [2024-12-10 14:38:22.596592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:57.811 [2024-12-10 14:38:22.596601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:57.811 [2024-12-10 14:38:22.596610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:57.811 [2024-12-10 14:38:22.596619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:57.811 [2024-12-10 14:38:22.596636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:57.811 [2024-12-10 14:38:22.596645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:57.811 [2024-12-10 14:38:22.596662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:57.811 [2024-12-10 14:38:22.596705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:57.811 [2024-12-10 14:38:22.596713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596722] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:57.811 [2024-12-10 14:38:22.596731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:57.811 [2024-12-10 14:38:22.596740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:57.811 [2024-12-10 14:38:22.596750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:57.811 [2024-12-10 14:38:22.596763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:57.811 [2024-12-10 14:38:22.596772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:57.811 [2024-12-10 14:38:22.596780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:57.811 [2024-12-10 14:38:22.596789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:57.811 [2024-12-10 14:38:22.596798] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:57.811 [2024-12-10 14:38:22.596807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:57.811 [2024-12-10 14:38:22.596816] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:57.811 [2024-12-10 14:38:22.596827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.596838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:57.811 [2024-12-10 14:38:22.596848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.596857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.596867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:57.811 [2024-12-10 14:38:22.596876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:57.811 [2024-12-10 14:38:22.596885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:57.811 [2024-12-10 14:38:22.596894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:57.811 [2024-12-10 14:38:22.596903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.596913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.596923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.596933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.596943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.596975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.596986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:57.811 [2024-12-10 14:38:22.596996] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:57.811 [2024-12-10 14:38:22.597006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.597016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:57.811 [2024-12-10 14:38:22.597026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:57.811 [2024-12-10 14:38:22.597037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:57.811 [2024-12-10 14:38:22.597047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:57.811 [2024-12-10 14:38:22.597057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.811 [2024-12-10 14:38:22.597067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:57.811 [2024-12-10 14:38:22.597076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.808 ms 00:32:57.811 [2024-12-10 14:38:22.597085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.811 [2024-12-10 14:38:22.597127] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:57.811 [2024-12-10 14:38:22.597139] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:02.007 [2024-12-10 14:38:26.114155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.114389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:02.007 [2024-12-10 14:38:26.114415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3522.736 ms 00:33:02.007 [2024-12-10 14:38:26.114426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.152571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.152617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:02.007 [2024-12-10 14:38:26.152632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.885 ms 00:33:02.007 [2024-12-10 14:38:26.152643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.152745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.152765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:02.007 [2024-12-10 14:38:26.152776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:02.007 [2024-12-10 14:38:26.152786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.197410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.197453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:02.007 [2024-12-10 14:38:26.197470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.590 ms 00:33:02.007 [2024-12-10 14:38:26.197497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.197542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.197553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:02.007 [2024-12-10 14:38:26.197564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:33:02.007 [2024-12-10 14:38:26.197574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.198084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.198108] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:02.007 [2024-12-10 14:38:26.198119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.441 ms 00:33:02.007 [2024-12-10 14:38:26.198129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.198171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.198183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:02.007 [2024-12-10 14:38:26.198193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:02.007 [2024-12-10 14:38:26.198203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.218213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.218249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:02.007 [2024-12-10 14:38:26.218262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.020 ms 00:33:02.007 [2024-12-10 14:38:26.218271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.248959] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:02.007 [2024-12-10 14:38:26.249017] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:02.007 [2024-12-10 14:38:26.249033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.249043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:02.007 [2024-12-10 14:38:26.249054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.707 ms 00:33:02.007 [2024-12-10 14:38:26.249063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.268680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.268846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:02.007 [2024-12-10 14:38:26.268990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.605 ms 00:33:02.007 [2024-12-10 14:38:26.269008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.285549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.285587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:02.007 [2024-12-10 14:38:26.285599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.525 ms 00:33:02.007 [2024-12-10 14:38:26.285609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.302629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.302692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:02.007 [2024-12-10 14:38:26.302722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.008 ms 00:33:02.007 [2024-12-10 14:38:26.302731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.303413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.303448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:02.007 [2024-12-10 
14:38:26.303460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.581 ms 00:33:02.007 [2024-12-10 14:38:26.303469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.387939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.387986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:02.007 [2024-12-10 14:38:26.388001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 84.583 ms 00:33:02.007 [2024-12-10 14:38:26.388012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.398047] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:02.007 [2024-12-10 14:38:26.398702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.398724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:02.007 [2024-12-10 14:38:26.398736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.645 ms 00:33:02.007 [2024-12-10 14:38:26.398746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.398828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.398844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:02.007 [2024-12-10 14:38:26.398855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:02.007 [2024-12-10 14:38:26.398865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.007 [2024-12-10 14:38:26.398925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.007 [2024-12-10 14:38:26.398938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:02.007 [2024-12-10 14:38:26.398948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:02.008 [2024-12-10 14:38:26.398958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.008 [2024-12-10 14:38:26.398979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.008 [2024-12-10 14:38:26.398989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:02.008 [2024-12-10 14:38:26.399003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:02.008 [2024-12-10 14:38:26.399013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.008 [2024-12-10 14:38:26.399048] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:02.008 [2024-12-10 14:38:26.399060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.008 [2024-12-10 14:38:26.399070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:02.008 [2024-12-10 14:38:26.399080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:02.008 [2024-12-10 14:38:26.399089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.008 [2024-12-10 14:38:26.432490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.008 [2024-12-10 14:38:26.432535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:02.008 [2024-12-10 14:38:26.432548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.435 ms 00:33:02.008 [2024-12-10 14:38:26.432557] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:02.008 [2024-12-10 14:38:26.432625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:02.008 [2024-12-10 14:38:26.432637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:33:02.008 [2024-12-10 14:38:26.432647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms
00:33:02.008 [2024-12-10 14:38:26.432656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:02.008 [2024-12-10 14:38:26.433717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3876.484 ms, result 0
00:33:02.008 [2024-12-10 14:38:26.448831] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:02.008 [2024-12-10 14:38:26.464818] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:33:02.008 [2024-12-10 14:38:26.473479] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:33:02.576 14:38:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:02.576 14:38:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:33:02.576 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:33:02.576 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:33:02.576 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:33:02.576 [2024-12-10 14:38:27.380794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:02.576 [2024-12-10 14:38:27.380962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:33:02.576 [2024-12-10 14:38:27.380989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:33:02.576 [2024-12-10 14:38:27.381000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:02.576 [2024-12-10 14:38:27.381032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:02.576 [2024-12-10 14:38:27.381043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:33:02.576 [2024-12-10 14:38:27.381054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:33:02.576 [2024-12-10 14:38:27.381063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:02.576 [2024-12-10 14:38:27.381083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:02.576 [2024-12-10 14:38:27.381095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:33:02.576 [2024-12-10 14:38:27.381105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:33:02.576 [2024-12-10 14:38:27.381115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:02.576 [2024-12-10 14:38:27.381177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.364 ms, result 0
00:33:02.576 true
00:33:02.835 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:33:02.835 {
00:33:02.835 "name": "ftl",
00:33:02.835 "properties": [
00:33:02.835 {
00:33:02.835 "name": "superblock_version",
00:33:02.835 "value": 5,
00:33:02.835 "read-only": true
00:33:02.835 },
00:33:02.835 {
00:33:02.836 "name": "base_device",
00:33:02.836 "bands": [
00:33:02.836 {
00:33:02.836 "id": 0,
00:33:02.836 "state": "CLOSED",
00:33:02.836 "validity": 1.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 1,
00:33:02.836 "state": "CLOSED",
00:33:02.836 "validity": 1.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 2,
00:33:02.836 "state": "CLOSED",
00:33:02.836 "validity": 0.007843137254901933
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 3,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 4,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 5,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 6,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 7,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 8,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 9,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 10,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 11,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 12,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 13,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 14,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 15,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 16,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 17,
00:33:02.836 "state": "FREE",
00:33:02.836 "validity": 0.0
00:33:02.836 }
00:33:02.836 ],
00:33:02.836 "read-only": true
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "name": "cache_device",
00:33:02.836 "type": "bdev",
00:33:02.836 "chunks": [
00:33:02.836 {
00:33:02.836 "id": 0,
00:33:02.836 "state": "INACTIVE",
00:33:02.836 "utilization": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 1,
00:33:02.836 "state": "OPEN",
00:33:02.836 "utilization": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 2,
00:33:02.836 "state": "OPEN",
00:33:02.836 "utilization": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 3,
00:33:02.836 "state": "FREE",
00:33:02.836 "utilization": 0.0
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "id": 4,
00:33:02.836 "state": "FREE",
00:33:02.836 "utilization": 0.0
00:33:02.836 }
00:33:02.836 ],
00:33:02.836 "read-only": true
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "name": "verbose_mode",
00:33:02.836 "value": true,
00:33:02.836 "unit": "",
00:33:02.836 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:33:02.836 },
00:33:02.836 {
00:33:02.836 "name": "prep_upgrade_on_shutdown",
00:33:02.836 "value": false,
00:33:02.836 "unit": "",
00:33:02.836 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:33:02.836 }
00:33:02.836 ]
00:33:02.836 }
00:33:02.836 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:33:02.836 14:38:27
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:02.836 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:03.095 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:03.095 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:03.095 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:03.095 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:03.095 14:38:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:03.354 Validate MD5 checksum, iteration 1 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:03.354 14:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:03.354 [2024-12-10 14:38:28.136645] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
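Before any data is read back, the two jq filters above assert a quiescent device: 'used' counts cache_device chunks with non-zero utilization and comes back 0. (The 'opened' filter selects a property named "bands", but in the dump above the band list actually sits under the "base_device" property, so that selector matches nothing and the count is 0 regardless of band state.) The xtrace that follows is the checksum loop itself; below is a minimal bash sketch of the loop these traced commands imply, where tcp_dd is the traced ftl/common.sh wrapper around the spdk_dd invocation shown, while the iterations count, the md5 array, and the tmp_file variable are reconstructed assumptions rather than the script verbatim.

# sketch: read the exported ftln1 bdev back over NVMe/TCP in 1024 MiB
# slices and compare each slice's md5 with the digest recorded earlier
# in the test, before the shutdown under test
test_validate_checksum() {
    local skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # 1 MiB blocks, 1024 of them, queue depth 2, offset past what was already read
        tcp_dd --ib=ftln1 --of="$tmp_file" --bs=$((1024 * 1024)) --count=1024 --qd=2 --skip="$skip"
        skip=$((skip + 1024))
        sum=$(md5sum "$tmp_file" | cut -f1 -d' ')
        # the traced script compares with != at upgrade_shutdown.sh@105; a mismatch fails the test
        [[ $sum == "${md5[i]}" ]]
    done
}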
00:33:03.354 [2024-12-10 14:38:28.136958] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85738 ] 00:33:03.616 [2024-12-10 14:38:28.322907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.616 [2024-12-10 14:38:28.447638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.530  [2024-12-10T14:38:30.930Z] Copying: 625/1024 [MB] (625 MBps) [2024-12-10T14:38:32.835Z] Copying: 1024/1024 [MB] (average 626 MBps) 00:33:08.001 00:33:08.001 14:38:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:08.001 14:38:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=89ff394af97246f0717b47400e05e03f 00:33:09.379 Validate MD5 checksum, iteration 2 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 89ff394af97246f0717b47400e05e03f != \8\9\f\f\3\9\4\a\f\9\7\2\4\6\f\0\7\1\7\b\4\7\4\0\0\e\0\5\e\0\3\f ]] 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:09.379 14:38:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:09.638 [2024-12-10 14:38:34.238515] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
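The comparison traced at upgrade_shutdown.sh@105 prints the expected digest as \8\9\f\f\3\9\4\a... That is bash xtrace rendering, not corruption: inside [[ ]] the right-hand side of != is treated as a glob pattern, so the script quotes the expected value to force a literal comparison, and xtrace preserves that quoting by backslash-escaping every character of the expanded word. A two-line reproduction (the variable name here is illustrative only):

# run with bash -x: the trace prints [[ 89ff... == \8\9\f\f... ]]
expected=89ff394af97246f0717b47400e05e03f
[[ 89ff394af97246f0717b47400e05e03f == "$expected" ]] && echo 'iteration 1 checksum OK'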
00:33:09.639 [2024-12-10 14:38:34.238835] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85806 ] 00:33:09.639 [2024-12-10 14:38:34.420035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.898 [2024-12-10 14:38:34.546855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.812  [2024-12-10T14:38:36.906Z] Copying: 648/1024 [MB] (648 MBps) [2024-12-10T14:38:40.198Z] Copying: 1024/1024 [MB] (average 647 MBps) 00:33:15.364 00:33:15.364 14:38:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:15.364 14:38:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3ff2a25b39d13d62ef954cc7c31c8a1a 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3ff2a25b39d13d62ef954cc7c31c8a1a != \3\f\f\2\a\2\5\b\3\9\d\1\3\d\6\2\e\f\9\5\4\c\c\7\c\3\1\c\8\a\1\a ]] 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85648 ]] 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85648 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85891 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85891 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85891 ']' 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
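With both read-back digests matching, the test moves to the event under test: tcp_target_shutdown_dirty SIGKILLs the target (the traced kill -9 85648), so FTL never runs its clean shutdown path and the superblock stays dirty; tcp_target_setup then relaunches spdk_tgt (PID 85891) from the saved tgt.json. A minimal sketch of that helper pair as the xtrace renders it (bodies abbreviated; spdk_tgt_bin and tgt_json stand in for the full paths shown above):

# kill hard so no shutdown state is persisted; the restart is then forced
# through the dirty-recovery path visible in the startup log that follows
# (Initialize recovery, Recover band state, P2L checkpoint preprocessing)
tcp_target_shutdown_dirty() {
    [[ -n $spdk_tgt_pid ]] || return 0
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
}

tcp_target_setup() {
    local base_bdev= cache_bdev=
    [[ -f $tgt_json ]]                       # reuse the config written before the kill
    "$spdk_tgt_bin" --cpumask='[0]' --config="$tgt_json" &
    spdk_tgt_pid=$!
    export spdk_tgt_pid
    waitforlisten "$spdk_tgt_pid"            # traced autotest_common.sh helper
}

The 'Currently unable to find bdev with name: cachen1' notices in the restarted target appear to be open retries issued while the bdevs from tgt.json are still being constructed; once cachen1p0 shows up it is picked as the write buffer cache, and because the superblock loads dirty (SHM: clean 0, shm_clean 0) the startup proceeds into recovery rather than a fresh initialization.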
00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.267 14:38:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:17.267 [2024-12-10 14:38:41.921018] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:33:17.267 [2024-12-10 14:38:41.921146] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85891 ] 00:33:17.267 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85648 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:17.527 [2024-12-10 14:38:42.108619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.527 [2024-12-10 14:38:42.212319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.464 [2024-12-10 14:38:43.150292] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:18.464 [2024-12-10 14:38:43.150586] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:18.724 [2024-12-10 14:38:43.295921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.724 [2024-12-10 14:38:43.295961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:18.724 [2024-12-10 14:38:43.295976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:18.724 [2024-12-10 14:38:43.295985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.724 [2024-12-10 14:38:43.296037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.296048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:18.725 [2024-12-10 14:38:43.296058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:18.725 [2024-12-10 14:38:43.296067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.296090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:18.725 [2024-12-10 14:38:43.297044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:18.725 [2024-12-10 14:38:43.297066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.297077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:18.725 [2024-12-10 14:38:43.297088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.987 ms 00:33:18.725 [2024-12-10 14:38:43.297098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.297433] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:18.725 [2024-12-10 14:38:43.320451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.320487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:18.725 [2024-12-10 14:38:43.320502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.055 ms 00:33:18.725 [2024-12-10 14:38:43.320527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.334153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:18.725 [2024-12-10 14:38:43.334191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:18.725 [2024-12-10 14:38:43.334203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:33:18.725 [2024-12-10 14:38:43.334212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.334654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.334668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:18.725 [2024-12-10 14:38:43.334700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.370 ms 00:33:18.725 [2024-12-10 14:38:43.334709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.334764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.334777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:18.725 [2024-12-10 14:38:43.334787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:33:18.725 [2024-12-10 14:38:43.334796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.334819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.334829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:18.725 [2024-12-10 14:38:43.334839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:18.725 [2024-12-10 14:38:43.334866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.334886] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:18.725 [2024-12-10 14:38:43.338608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.338771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:18.725 [2024-12-10 14:38:43.338791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.732 ms 00:33:18.725 [2024-12-10 14:38:43.338818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.338860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.338872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:18.725 [2024-12-10 14:38:43.338882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:18.725 [2024-12-10 14:38:43.338891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.338926] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:18.725 [2024-12-10 14:38:43.338950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:18.725 [2024-12-10 14:38:43.338984] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:18.725 [2024-12-10 14:38:43.339004] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:18.725 [2024-12-10 14:38:43.339091] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:18.725 [2024-12-10 14:38:43.339105] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:18.725 [2024-12-10 14:38:43.339118] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:18.725 [2024-12-10 14:38:43.339131] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:18.725 [2024-12-10 14:38:43.339143] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:18.725 [2024-12-10 14:38:43.339154] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:18.725 [2024-12-10 14:38:43.339163] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:18.725 [2024-12-10 14:38:43.339173] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:18.725 [2024-12-10 14:38:43.339182] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:18.725 [2024-12-10 14:38:43.339194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.339204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:18.725 [2024-12-10 14:38:43.339214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.271 ms 00:33:18.725 [2024-12-10 14:38:43.339224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.339294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.725 [2024-12-10 14:38:43.339305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:18.725 [2024-12-10 14:38:43.339315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:33:18.725 [2024-12-10 14:38:43.339324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.725 [2024-12-10 14:38:43.339407] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:18.725 [2024-12-10 14:38:43.339422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:18.725 [2024-12-10 14:38:43.339433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:18.725 [2024-12-10 14:38:43.339442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:18.725 [2024-12-10 14:38:43.339462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:18.725 [2024-12-10 14:38:43.339481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:18.725 [2024-12-10 14:38:43.339491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:18.725 [2024-12-10 14:38:43.339500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:18.725 [2024-12-10 14:38:43.339518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:18.725 [2024-12-10 14:38:43.339527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:18.725 [2024-12-10 14:38:43.339547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:18.725 [2024-12-10 14:38:43.339555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:18.725 [2024-12-10 14:38:43.339573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:18.725 [2024-12-10 14:38:43.339582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:18.725 [2024-12-10 14:38:43.339600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:18.725 [2024-12-10 14:38:43.339618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:18.725 [2024-12-10 14:38:43.339627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:18.725 [2024-12-10 14:38:43.339637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:18.725 [2024-12-10 14:38:43.339646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:18.725 [2024-12-10 14:38:43.339655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:18.725 [2024-12-10 14:38:43.339664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:18.725 [2024-12-10 14:38:43.339673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:18.725 [2024-12-10 14:38:43.339682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:18.725 [2024-12-10 14:38:43.339706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:18.725 [2024-12-10 14:38:43.339714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:18.725 [2024-12-10 14:38:43.339723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:18.725 [2024-12-10 14:38:43.339732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:18.725 [2024-12-10 14:38:43.339741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:18.725 [2024-12-10 14:38:43.339759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:18.725 [2024-12-10 14:38:43.339767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:18.725 [2024-12-10 14:38:43.339785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:18.725 [2024-12-10 14:38:43.339812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:18.725 [2024-12-10 14:38:43.339820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339829] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:18.725 [2024-12-10 14:38:43.339839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:18.725 [2024-12-10 14:38:43.339849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:18.725 [2024-12-10 14:38:43.339858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:18.725 [2024-12-10 14:38:43.339867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:18.726 [2024-12-10 14:38:43.339876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:18.726 [2024-12-10 14:38:43.339885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:18.726 [2024-12-10 14:38:43.339894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:18.726 [2024-12-10 14:38:43.339902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:18.726 [2024-12-10 14:38:43.339911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:18.726 [2024-12-10 14:38:43.339921] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:18.726 [2024-12-10 14:38:43.339933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.339943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:18.726 [2024-12-10 14:38:43.339953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.339963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.339973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:18.726 [2024-12-10 14:38:43.339983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:18.726 [2024-12-10 14:38:43.339993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:18.726 [2024-12-10 14:38:43.340003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:18.726 [2024-12-10 14:38:43.340013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.340023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.340033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.340042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.340052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.340061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.340071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:18.726 [2024-12-10 14:38:43.340084] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:18.726 [2024-12-10 14:38:43.340095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.340109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:18.726 [2024-12-10 14:38:43.340120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:18.726 [2024-12-10 14:38:43.340131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:18.726 [2024-12-10 14:38:43.340141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:18.726 [2024-12-10 14:38:43.340152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.340161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:18.726 [2024-12-10 14:38:43.340171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.799 ms 00:33:18.726 [2024-12-10 14:38:43.340191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.374293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.374447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:18.726 [2024-12-10 14:38:43.374484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.113 ms 00:33:18.726 [2024-12-10 14:38:43.374495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.374533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.374543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:18.726 [2024-12-10 14:38:43.374553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:18.726 [2024-12-10 14:38:43.374563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.419016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.419051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:18.726 [2024-12-10 14:38:43.419063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.469 ms 00:33:18.726 [2024-12-10 14:38:43.419074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.419101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.419112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:18.726 [2024-12-10 14:38:43.419122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:18.726 [2024-12-10 14:38:43.419136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.419255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.419267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:18.726 [2024-12-10 14:38:43.419277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:33:18.726 [2024-12-10 14:38:43.419286] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.419324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.419335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:18.726 [2024-12-10 14:38:43.419345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:18.726 [2024-12-10 14:38:43.419353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.439851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.439886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:18.726 [2024-12-10 14:38:43.439898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.505 ms 00:33:18.726 [2024-12-10 14:38:43.439912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.440016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.440031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:18.726 [2024-12-10 14:38:43.440042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:18.726 [2024-12-10 14:38:43.440051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.474617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.474656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:18.726 [2024-12-10 14:38:43.474687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.601 ms 00:33:18.726 [2024-12-10 14:38:43.474698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.726 [2024-12-10 14:38:43.488468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.726 [2024-12-10 14:38:43.488608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:18.726 [2024-12-10 14:38:43.488652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:33:18.726 [2024-12-10 14:38:43.488664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.986 [2024-12-10 14:38:43.570066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.986 [2024-12-10 14:38:43.570123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:18.986 [2024-12-10 14:38:43.570138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 81.462 ms 00:33:18.986 [2024-12-10 14:38:43.570148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.986 [2024-12-10 14:38:43.570305] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:18.986 [2024-12-10 14:38:43.570419] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:18.986 [2024-12-10 14:38:43.570520] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:18.986 [2024-12-10 14:38:43.570613] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:18.986 [2024-12-10 14:38:43.570625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.986 [2024-12-10 14:38:43.570636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:18.986 [2024-12-10 
14:38:43.570646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.432 ms 00:33:18.986 [2024-12-10 14:38:43.570656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.986 [2024-12-10 14:38:43.570757] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:18.986 [2024-12-10 14:38:43.570772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.986 [2024-12-10 14:38:43.570785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:18.986 [2024-12-10 14:38:43.570795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:18.986 [2024-12-10 14:38:43.570804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.986 [2024-12-10 14:38:43.591422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.986 [2024-12-10 14:38:43.591465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:18.986 [2024-12-10 14:38:43.591478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.629 ms 00:33:18.986 [2024-12-10 14:38:43.591488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.986 [2024-12-10 14:38:43.604260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.986 [2024-12-10 14:38:43.604296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:18.986 [2024-12-10 14:38:43.604309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:33:18.986 [2024-12-10 14:38:43.604318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.986 [2024-12-10 14:38:43.604402] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:18.986 [2024-12-10 14:38:43.604581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.986 [2024-12-10 14:38:43.604592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:18.986 [2024-12-10 14:38:43.604602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:33:18.986 [2024-12-10 14:38:43.604611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.555 [2024-12-10 14:38:44.184008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.555 [2024-12-10 14:38:44.184195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:19.555 [2024-12-10 14:38:44.184230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 579.230 ms 00:33:19.555 [2024-12-10 14:38:44.184242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.556 [2024-12-10 14:38:44.189962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.556 [2024-12-10 14:38:44.190002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:19.556 [2024-12-10 14:38:44.190015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.338 ms 00:33:19.556 [2024-12-10 14:38:44.190026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.556 [2024-12-10 14:38:44.190565] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:19.556 [2024-12-10 14:38:44.190594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.556 [2024-12-10 14:38:44.190605] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:19.556 [2024-12-10 14:38:44.190616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:33:19.556 [2024-12-10 14:38:44.190626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.556 [2024-12-10 14:38:44.190657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.556 [2024-12-10 14:38:44.190680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:19.556 [2024-12-10 14:38:44.190691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:19.556 [2024-12-10 14:38:44.190706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.556 [2024-12-10 14:38:44.190740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 587.290 ms, result 0 00:33:19.556 [2024-12-10 14:38:44.190780] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:19.556 [2024-12-10 14:38:44.190882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.556 [2024-12-10 14:38:44.190892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:19.556 [2024-12-10 14:38:44.190912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.103 ms 00:33:19.556 [2024-12-10 14:38:44.190921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.775924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.775971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:20.125 [2024-12-10 14:38:44.776015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 584.743 ms 00:33:20.125 [2024-12-10 14:38:44.776025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.781750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.781896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:20.125 [2024-12-10 14:38:44.781932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.311 ms 00:33:20.125 [2024-12-10 14:38:44.781944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.782517] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:20.125 [2024-12-10 14:38:44.782538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.782548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:20.125 [2024-12-10 14:38:44.782558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.560 ms 00:33:20.125 [2024-12-10 14:38:44.782568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.782598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.782609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:20.125 [2024-12-10 14:38:44.782619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:20.125 [2024-12-10 14:38:44.782628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 
14:38:44.782664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 592.843 ms, result 0 00:33:20.125 [2024-12-10 14:38:44.782722] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:20.125 [2024-12-10 14:38:44.782735] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:20.125 [2024-12-10 14:38:44.782747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.782757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:20.125 [2024-12-10 14:38:44.782767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1180.280 ms 00:33:20.125 [2024-12-10 14:38:44.782777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.782806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.782821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:20.125 [2024-12-10 14:38:44.782832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:20.125 [2024-12-10 14:38:44.782842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.793558] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:20.125 [2024-12-10 14:38:44.793828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.793847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:20.125 [2024-12-10 14:38:44.793858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.987 ms 00:33:20.125 [2024-12-10 14:38:44.793868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.794428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.794442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:20.125 [2024-12-10 14:38:44.794456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.490 ms 00:33:20.125 [2024-12-10 14:38:44.794465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.796433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.796457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:20.125 [2024-12-10 14:38:44.796479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.954 ms 00:33:20.125 [2024-12-10 14:38:44.796487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.796540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.796551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:20.125 [2024-12-10 14:38:44.796561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:20.125 [2024-12-10 14:38:44.796575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.796667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.796678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:20.125 
[2024-12-10 14:38:44.796688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:20.125 [2024-12-10 14:38:44.796720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.796740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.796749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:20.125 [2024-12-10 14:38:44.796759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:20.125 [2024-12-10 14:38:44.796768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.796803] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:20.125 [2024-12-10 14:38:44.796815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.796847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:20.125 [2024-12-10 14:38:44.796856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:20.125 [2024-12-10 14:38:44.796865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.125 [2024-12-10 14:38:44.796909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.125 [2024-12-10 14:38:44.796919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:20.126 [2024-12-10 14:38:44.796928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:33:20.126 [2024-12-10 14:38:44.796937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.126 [2024-12-10 14:38:44.797897] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1503.985 ms, result 0 00:33:20.126 [2024-12-10 14:38:44.810204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.126 [2024-12-10 14:38:44.826188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:20.126 [2024-12-10 14:38:44.835158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:20.126 Validate MD5 checksum, iteration 1 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:20.126 14:38:44 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:20.126 14:38:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:20.385 [2024-12-10 14:38:44.984232] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 00:33:20.385 [2024-12-10 14:38:44.984725] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85930 ] 00:33:20.385 [2024-12-10 14:38:45.169496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.644 [2024-12-10 14:38:45.315677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.550  [2024-12-10T14:38:47.951Z] Copying: 636/1024 [MB] (636 MBps) [2024-12-10T14:38:49.329Z] Copying: 1024/1024 [MB] (average 631 MBps) 00:33:24.495 00:33:24.495 14:38:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:24.495 14:38:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:26.401 Validate MD5 checksum, iteration 2 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=89ff394af97246f0717b47400e05e03f 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 89ff394af97246f0717b47400e05e03f != \8\9\f\f\3\9\4\a\f\9\7\2\4\6\f\0\7\1\7\b\4\7\4\0\0\e\0\5\e\0\3\f ]] 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:26.401 14:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:26.401 [2024-12-10 14:38:51.087622] Starting SPDK v25.01-pre git sha1 
4cd130da1 / DPDK 24.03.0 initialization... 00:33:26.401 [2024-12-10 14:38:51.087935] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85998 ] 00:33:26.660 [2024-12-10 14:38:51.272100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.660 [2024-12-10 14:38:51.399695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.564  [2024-12-10T14:38:53.965Z] Copying: 630/1024 [MB] (630 MBps) [2024-12-10T14:38:54.972Z] Copying: 1024/1024 [MB] (average 633 MBps) 00:33:30.138 00:33:30.138 14:38:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:30.138 14:38:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3ff2a25b39d13d62ef954cc7c31c8a1a 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3ff2a25b39d13d62ef954cc7c31c8a1a != \3\f\f\2\a\2\5\b\3\9\d\1\3\d\6\2\e\f\9\5\4\c\c\7\c\3\1\c\8\a\1\a ]] 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85891 ]] 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85891 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85891 ']' 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85891 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85891 00:33:32.049 killing process with pid 85891 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85891' 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 85891 00:33:32.049 14:38:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85891 00:33:33.430 [2024-12-10 14:38:57.920460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:33.430 [2024-12-10 14:38:57.940062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.940100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:33.430 [2024-12-10 14:38:57.940115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:33.430 [2024-12-10 14:38:57.940125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:57.940146] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:33.430 [2024-12-10 14:38:57.943934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.943970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:33.430 [2024-12-10 14:38:57.943982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.779 ms 00:33:33.430 [2024-12-10 14:38:57.943991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:57.944181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.944193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:33.430 [2024-12-10 14:38:57.944203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.167 ms 00:33:33.430 [2024-12-10 14:38:57.944213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:57.945442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.945479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:33.430 [2024-12-10 14:38:57.945490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.216 ms 00:33:33.430 [2024-12-10 14:38:57.945505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:57.946405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.946437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:33.430 [2024-12-10 14:38:57.946448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.836 ms 00:33:33.430 [2024-12-10 14:38:57.946457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:57.960452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.960490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:33.430 [2024-12-10 14:38:57.960507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.969 ms 00:33:33.430 [2024-12-10 14:38:57.960533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:57.968257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.968289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:33.430 [2024-12-10 14:38:57.968301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.701 ms 00:33:33.430 [2024-12-10 14:38:57.968310] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:57.968391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.968403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:33.430 [2024-12-10 14:38:57.968413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:33:33.430 [2024-12-10 14:38:57.968428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:57.982672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.982707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:33.430 [2024-12-10 14:38:57.982719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.252 ms 00:33:33.430 [2024-12-10 14:38:57.982728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:57.997062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:57.997097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:33.430 [2024-12-10 14:38:57.997108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.324 ms 00:33:33.430 [2024-12-10 14:38:57.997117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:58.011131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:58.011285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:33.430 [2024-12-10 14:38:58.011306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.002 ms 00:33:33.430 [2024-12-10 14:38:58.011315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:58.024984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.430 [2024-12-10 14:38:58.025016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:33.430 [2024-12-10 14:38:58.025027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.581 ms 00:33:33.430 [2024-12-10 14:38:58.025037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.430 [2024-12-10 14:38:58.025068] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:33.430 [2024-12-10 14:38:58.025083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:33.430 [2024-12-10 14:38:58.025095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:33.430 [2024-12-10 14:38:58.025105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:33.430 [2024-12-10 14:38:58.025116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 
[2024-12-10 14:38:58.025167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:33.430 [2024-12-10 14:38:58.025267] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:33.430 [2024-12-10 14:38:58.025277] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 12f70387-fa62-43ad-bf58-6b3870524c55 00:33:33.430 [2024-12-10 14:38:58.025286] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:33.430 [2024-12-10 14:38:58.025295] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:33:33.430 [2024-12-10 14:38:58.025304] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:33:33.430 [2024-12-10 14:38:58.025313] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:33:33.430 [2024-12-10 14:38:58.025322] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:33.430 [2024-12-10 14:38:58.025331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:33.430 [2024-12-10 14:38:58.025346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:33.430 [2024-12-10 14:38:58.025355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:33.431 [2024-12-10 14:38:58.025364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:33.431 [2024-12-10 14:38:58.025375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.431 [2024-12-10 14:38:58.025385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:33.431 [2024-12-10 14:38:58.025396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:33:33.431 [2024-12-10 14:38:58.025405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.431 [2024-12-10 14:38:58.043737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.431 [2024-12-10 14:38:58.043769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:33.431 [2024-12-10 14:38:58.043782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.345 ms 00:33:33.431 [2024-12-10 14:38:58.043791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
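The two "Validate MD5 checksum" iterations traced above are driven by the loop at ftl/upgrade_shutdown.sh@96-105, whose xtrace is interleaved with the FTL notices. A minimal sketch of that loop's shape, reconstructed from the trace rather than copied from the source tree: tcp_dd is the ftl/common.sh helper that runs spdk_dd against the NVMe/TCP initiator, and the expected_md5 array is an assumption standing in for the reference checksums the test records before the shutdown/upgrade cycle (not shown in this excerpt).

# Sketch only -- reconstructed from the xtrace; iterations, testfile and
# expected_md5 are assumed to be set up by the surrounding test.
validate_checksums() {
    local skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read the next 1024 x 1 MiB blocks back from the ftln1 bdev over NVMe/TCP
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # Hash what came back and compare it with the pre-shutdown reference
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${expected_md5[i]}" ]] || return 1
    done
}

In the run above the loop passes twice (sums 89ff394af97246f0717b47400e05e03f and 3ff2a25b39d13d62ef954cc7c31c8a1a), after which upgrade_shutdown.sh@118-119 clears the traps and starts cleanup.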
00:33:33.431 [2024-12-10 14:38:58.044264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.431 [2024-12-10 14:38:58.044276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:33.431 [2024-12-10 14:38:58.044287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.437 ms 00:33:33.431 [2024-12-10 14:38:58.044297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.431 [2024-12-10 14:38:58.106023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.431 [2024-12-10 14:38:58.106171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:33.431 [2024-12-10 14:38:58.106192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.431 [2024-12-10 14:38:58.106208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.431 [2024-12-10 14:38:58.106238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.431 [2024-12-10 14:38:58.106248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:33.431 [2024-12-10 14:38:58.106258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.431 [2024-12-10 14:38:58.106268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.431 [2024-12-10 14:38:58.106343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.431 [2024-12-10 14:38:58.106356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:33.431 [2024-12-10 14:38:58.106366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.431 [2024-12-10 14:38:58.106375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.431 [2024-12-10 14:38:58.106396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.431 [2024-12-10 14:38:58.106406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:33.431 [2024-12-10 14:38:58.106416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.431 [2024-12-10 14:38:58.106426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.431 [2024-12-10 14:38:58.223554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.431 [2024-12-10 14:38:58.223600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:33.431 [2024-12-10 14:38:58.223616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.431 [2024-12-10 14:38:58.223627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.690 [2024-12-10 14:38:58.315332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.690 [2024-12-10 14:38:58.315376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:33.690 [2024-12-10 14:38:58.315390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.690 [2024-12-10 14:38:58.315400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.690 [2024-12-10 14:38:58.315489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.690 [2024-12-10 14:38:58.315502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:33.690 [2024-12-10 14:38:58.315513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.690 [2024-12-10 14:38:58.315522] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.690 [2024-12-10 14:38:58.315565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.690 [2024-12-10 14:38:58.315592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:33.690 [2024-12-10 14:38:58.315603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.690 [2024-12-10 14:38:58.315613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.690 [2024-12-10 14:38:58.315756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.690 [2024-12-10 14:38:58.315771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:33.690 [2024-12-10 14:38:58.315781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.690 [2024-12-10 14:38:58.315791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.690 [2024-12-10 14:38:58.315846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.690 [2024-12-10 14:38:58.315857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:33.690 [2024-12-10 14:38:58.315871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.690 [2024-12-10 14:38:58.315881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.690 [2024-12-10 14:38:58.315916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.690 [2024-12-10 14:38:58.315927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:33.690 [2024-12-10 14:38:58.315937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.690 [2024-12-10 14:38:58.315946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.690 [2024-12-10 14:38:58.315986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:33.690 [2024-12-10 14:38:58.316001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:33.690 [2024-12-10 14:38:58.316011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:33.690 [2024-12-10 14:38:58.316021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.690 [2024-12-10 14:38:58.316130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 376.648 ms, result 0 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:33:35.070 Remove shared memory files 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:35.070 14:38:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85648 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:35.070 ************************************ 00:33:35.070 END TEST ftl_upgrade_shutdown 00:33:35.070 ************************************ 00:33:35.070 00:33:35.070 real 1m29.133s 00:33:35.070 user 1m58.496s 00:33:35.070 sys 0m26.770s 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:35.070 14:38:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:35.070 14:38:59 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:33:35.070 14:38:59 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:35.070 14:38:59 ftl -- ftl/ftl.sh@14 -- # killprocess 78010 00:33:35.070 14:38:59 ftl -- common/autotest_common.sh@954 -- # '[' -z 78010 ']' 00:33:35.070 14:38:59 ftl -- common/autotest_common.sh@958 -- # kill -0 78010 00:33:35.070 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78010) - No such process 00:33:35.070 Process with pid 78010 is not found 00:33:35.070 14:38:59 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 78010 is not found' 00:33:35.070 14:38:59 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:35.070 14:38:59 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86118 00:33:35.070 14:38:59 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:35.070 14:38:59 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86118 00:33:35.070 14:38:59 ftl -- common/autotest_common.sh@835 -- # '[' -z 86118 ']' 00:33:35.070 14:38:59 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.070 14:38:59 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:35.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.070 14:38:59 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.070 14:38:59 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:35.070 14:38:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:35.070 [2024-12-10 14:38:59.733838] Starting SPDK v25.01-pre git sha1 4cd130da1 / DPDK 24.03.0 initialization... 
4cd130da1 / DPDK 24.03.0 initialization...
00:33:35.070 [2024-12-10 14:38:59.733976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86118 ] 00:33:35.330 [2024-12-10 14:38:59.921710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.330 [2024-12-10 14:39:00.031042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.266 14:39:00 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.266 14:39:00 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:36.266 14:39:00 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:36.525 nvme0n1 00:33:36.525 14:39:01 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:36.525 14:39:01 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:36.525 14:39:01 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:36.525 14:39:01 ftl -- ftl/common.sh@28 -- # stores=6af93b64-27a6-4fbb-8e5d-25ee6d4ed6d4 00:33:36.525 14:39:01 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:36.525 14:39:01 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6af93b64-27a6-4fbb-8e5d-25ee6d4ed6d4 00:33:36.784 14:39:01 ftl -- ftl/ftl.sh@23 -- # killprocess 86118 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@954 -- # '[' -z 86118 ']' 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@958 -- # kill -0 86118 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@959 -- # uname 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86118 00:33:36.784 killing process with pid 86118 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86118' 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@973 -- # kill 86118 00:33:36.784 14:39:01 ftl -- common/autotest_common.sh@978 -- # wait 86118 00:33:39.322 14:39:03 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:39.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:39.581 Waiting for block devices as requested 00:33:39.841 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:39.841 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:39.841 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:40.100 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:45.376 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:45.376 Remove shared memory files 00:33:45.376 14:39:09 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:45.376 14:39:09 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:45.376 14:39:09 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:45.376 14:39:09 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:45.376 14:39:09 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:45.376 14:39:09 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:45.376 14:39:09 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:45.376 
************************************ 00:33:45.376 END TEST ftl 00:33:45.376 ************************************ 00:33:45.376 00:33:45.376 real 12m1.293s 00:33:45.376 user 14m28.528s 00:33:45.376 sys 1m42.914s 00:33:45.376 14:39:09 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.376 14:39:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:45.376 14:39:09 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:45.376 14:39:09 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:45.376 14:39:09 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:45.376 14:39:09 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:45.376 14:39:09 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:45.376 14:39:09 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:45.376 14:39:09 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:45.376 14:39:09 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:45.376 14:39:09 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:45.376 14:39:09 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:45.376 14:39:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:45.376 14:39:09 -- common/autotest_common.sh@10 -- # set +x 00:33:45.376 14:39:09 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:45.376 14:39:09 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:45.376 14:39:09 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:45.376 14:39:09 -- common/autotest_common.sh@10 -- # set +x 00:33:47.912 INFO: APP EXITING 00:33:47.912 INFO: killing all VMs 00:33:47.912 INFO: killing vhost app 00:33:47.912 INFO: EXIT DONE 00:33:47.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:48.481 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:48.481 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:48.481 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:48.481 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:49.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:49.620 Cleaning 00:33:49.620 Removing: /var/run/dpdk/spdk0/config 00:33:49.620 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:49.620 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:49.620 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:49.620 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:49.620 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:49.620 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:49.620 Removing: /var/run/dpdk/spdk0 00:33:49.620 Removing: /var/run/dpdk/spdk_pid58731 00:33:49.620 Removing: /var/run/dpdk/spdk_pid58966 00:33:49.620 Removing: /var/run/dpdk/spdk_pid59200 00:33:49.620 Removing: /var/run/dpdk/spdk_pid59310 00:33:49.620 Removing: /var/run/dpdk/spdk_pid59366 00:33:49.620 Removing: /var/run/dpdk/spdk_pid59499 00:33:49.620 Removing: /var/run/dpdk/spdk_pid59523 00:33:49.620 Removing: /var/run/dpdk/spdk_pid59733 00:33:49.620 Removing: /var/run/dpdk/spdk_pid59839 00:33:49.620 Removing: /var/run/dpdk/spdk_pid59947 00:33:49.620 Removing: /var/run/dpdk/spdk_pid60074 00:33:49.620 Removing: /var/run/dpdk/spdk_pid60182 00:33:49.620 Removing: /var/run/dpdk/spdk_pid60221 00:33:49.620 Removing: /var/run/dpdk/spdk_pid60258 00:33:49.620 Removing: /var/run/dpdk/spdk_pid60334 00:33:49.620 Removing: /var/run/dpdk/spdk_pid60456 00:33:49.620 Removing: /var/run/dpdk/spdk_pid60904 00:33:49.620 Removing: /var/run/dpdk/spdk_pid60981 
00:33:49.620 Removing: /var/run/dpdk/spdk_pid61061 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61078 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61234 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61250 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61407 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61424 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61492 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61516 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61580 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61604 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61806 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61848 00:33:49.620 Removing: /var/run/dpdk/spdk_pid61937 00:33:49.620 Removing: /var/run/dpdk/spdk_pid62131 00:33:49.620 Removing: /var/run/dpdk/spdk_pid62226 00:33:49.620 Removing: /var/run/dpdk/spdk_pid62274 00:33:49.620 Removing: /var/run/dpdk/spdk_pid62728 00:33:49.620 Removing: /var/run/dpdk/spdk_pid62826 00:33:49.620 Removing: /var/run/dpdk/spdk_pid62953 00:33:49.620 Removing: /var/run/dpdk/spdk_pid63006 00:33:49.620 Removing: /var/run/dpdk/spdk_pid63037 00:33:49.620 Removing: /var/run/dpdk/spdk_pid63121 00:33:49.880 Removing: /var/run/dpdk/spdk_pid63769 00:33:49.880 Removing: /var/run/dpdk/spdk_pid63818 00:33:49.880 Removing: /var/run/dpdk/spdk_pid64313 00:33:49.880 Removing: /var/run/dpdk/spdk_pid64417 00:33:49.880 Removing: /var/run/dpdk/spdk_pid64537 00:33:49.880 Removing: /var/run/dpdk/spdk_pid64590 00:33:49.880 Removing: /var/run/dpdk/spdk_pid64621 00:33:49.880 Removing: /var/run/dpdk/spdk_pid64647 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66551 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66699 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66703 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66721 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66765 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66769 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66781 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66831 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66835 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66847 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66892 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66896 00:33:49.880 Removing: /var/run/dpdk/spdk_pid66908 00:33:49.880 Removing: /var/run/dpdk/spdk_pid68340 00:33:49.880 Removing: /var/run/dpdk/spdk_pid68449 00:33:49.880 Removing: /var/run/dpdk/spdk_pid69880 00:33:49.880 Removing: /var/run/dpdk/spdk_pid71624 00:33:49.880 Removing: /var/run/dpdk/spdk_pid71715 00:33:49.880 Removing: /var/run/dpdk/spdk_pid71790 00:33:49.880 Removing: /var/run/dpdk/spdk_pid71901 00:33:49.880 Removing: /var/run/dpdk/spdk_pid71999 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72096 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72181 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72262 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72372 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72469 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72571 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72663 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72738 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72849 00:33:49.880 Removing: /var/run/dpdk/spdk_pid72946 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73055 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73137 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73219 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73333 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73426 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73527 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73612 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73693 00:33:49.880 Removing: 
/var/run/dpdk/spdk_pid73776 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73853 00:33:49.880 Removing: /var/run/dpdk/spdk_pid73966 00:33:49.880 Removing: /var/run/dpdk/spdk_pid74064 00:33:49.880 Removing: /var/run/dpdk/spdk_pid74165 00:33:49.880 Removing: /var/run/dpdk/spdk_pid74249 00:33:49.880 Removing: /var/run/dpdk/spdk_pid74330 00:33:49.880 Removing: /var/run/dpdk/spdk_pid74408 00:33:49.880 Removing: /var/run/dpdk/spdk_pid74489 00:33:49.880 Removing: /var/run/dpdk/spdk_pid74598 00:33:50.140 Removing: /var/run/dpdk/spdk_pid74694 00:33:50.140 Removing: /var/run/dpdk/spdk_pid74849 00:33:50.140 Removing: /var/run/dpdk/spdk_pid75151 00:33:50.140 Removing: /var/run/dpdk/spdk_pid75194 00:33:50.140 Removing: /var/run/dpdk/spdk_pid75653 00:33:50.140 Removing: /var/run/dpdk/spdk_pid75837 00:33:50.140 Removing: /var/run/dpdk/spdk_pid75945 00:33:50.140 Removing: /var/run/dpdk/spdk_pid76061 00:33:50.140 Removing: /var/run/dpdk/spdk_pid76122 00:33:50.140 Removing: /var/run/dpdk/spdk_pid76153 00:33:50.140 Removing: /var/run/dpdk/spdk_pid76457 00:33:50.140 Removing: /var/run/dpdk/spdk_pid76534 00:33:50.140 Removing: /var/run/dpdk/spdk_pid76625 00:33:50.140 Removing: /var/run/dpdk/spdk_pid77051 00:33:50.140 Removing: /var/run/dpdk/spdk_pid77204 00:33:50.140 Removing: /var/run/dpdk/spdk_pid78010 00:33:50.140 Removing: /var/run/dpdk/spdk_pid78165 00:33:50.140 Removing: /var/run/dpdk/spdk_pid78373 00:33:50.140 Removing: /var/run/dpdk/spdk_pid78487 00:33:50.140 Removing: /var/run/dpdk/spdk_pid78814 00:33:50.140 Removing: /var/run/dpdk/spdk_pid79098 00:33:50.140 Removing: /var/run/dpdk/spdk_pid79456 00:33:50.140 Removing: /var/run/dpdk/spdk_pid79661 00:33:50.140 Removing: /var/run/dpdk/spdk_pid79815 00:33:50.140 Removing: /var/run/dpdk/spdk_pid79887 00:33:50.140 Removing: /var/run/dpdk/spdk_pid80039 00:33:50.140 Removing: /var/run/dpdk/spdk_pid80070 00:33:50.140 Removing: /var/run/dpdk/spdk_pid80135 00:33:50.140 Removing: /var/run/dpdk/spdk_pid80355 00:33:50.140 Removing: /var/run/dpdk/spdk_pid80625 00:33:50.140 Removing: /var/run/dpdk/spdk_pid81104 00:33:50.140 Removing: /var/run/dpdk/spdk_pid81584 00:33:50.140 Removing: /var/run/dpdk/spdk_pid82074 00:33:50.140 Removing: /var/run/dpdk/spdk_pid82627 00:33:50.140 Removing: /var/run/dpdk/spdk_pid82783 00:33:50.140 Removing: /var/run/dpdk/spdk_pid82877 00:33:50.140 Removing: /var/run/dpdk/spdk_pid83563 00:33:50.140 Removing: /var/run/dpdk/spdk_pid83632 00:33:50.140 Removing: /var/run/dpdk/spdk_pid84134 00:33:50.140 Removing: /var/run/dpdk/spdk_pid84515 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85068 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85200 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85254 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85318 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85374 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85438 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85648 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85738 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85806 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85891 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85930 00:33:50.140 Removing: /var/run/dpdk/spdk_pid85998 00:33:50.140 Removing: /var/run/dpdk/spdk_pid86118 00:33:50.403 Clean 00:33:50.403 14:39:15 -- common/autotest_common.sh@1453 -- # return 0 00:33:50.403 14:39:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:50.403 14:39:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:50.403 14:39:15 -- common/autotest_common.sh@10 -- # set +x 00:33:50.403 14:39:15 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:33:50.403 14:39:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:50.403 14:39:15 -- common/autotest_common.sh@10 -- # set +x 00:33:50.403 14:39:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:50.403 14:39:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:50.403 14:39:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:50.403 14:39:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:50.403 14:39:15 -- spdk/autotest.sh@398 -- # hostname 00:33:50.403 14:39:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:50.664 geninfo: WARNING: invalid characters removed from testname! 00:34:22.811 14:39:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:22.811 14:39:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:24.187 14:39:48 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:26.720 14:39:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:28.633 14:39:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:31.171 14:39:55 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:33.076 14:39:57 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:33.076 14:39:57 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:33.076 14:39:57 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:33.076 14:39:57 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:33.076 14:39:57 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:33.076 14:39:57 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:33.076 + [[ -n 5241 ]] 00:34:33.076 + sudo kill 5241 00:34:33.086 [Pipeline] } 00:34:33.102 [Pipeline] // timeout 00:34:33.107 [Pipeline] } 00:34:33.121 [Pipeline] // stage 00:34:33.126 [Pipeline] } 00:34:33.140 [Pipeline] // catchError 00:34:33.149 [Pipeline] stage 00:34:33.151 [Pipeline] { (Stop VM) 00:34:33.163 [Pipeline] sh 00:34:33.450 + vagrant halt 00:34:36.741 ==> default: Halting domain... 00:34:43.328 [Pipeline] sh 00:34:43.611 + vagrant destroy -f 00:34:46.147 ==> default: Removing domain... 00:34:46.420 [Pipeline] sh 00:34:46.702 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:46.711 [Pipeline] } 00:34:46.726 [Pipeline] // stage 00:34:46.731 [Pipeline] } 00:34:46.745 [Pipeline] // dir 00:34:46.750 [Pipeline] } 00:34:46.765 [Pipeline] // wrap 00:34:46.771 [Pipeline] } 00:34:46.783 [Pipeline] // catchError 00:34:46.793 [Pipeline] stage 00:34:46.795 [Pipeline] { (Epilogue) 00:34:46.808 [Pipeline] sh 00:34:47.147 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:52.437 [Pipeline] catchError 00:34:52.439 [Pipeline] { 00:34:52.451 [Pipeline] sh 00:34:52.735 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:52.993 Artifacts sizes are good 00:34:53.001 [Pipeline] } 00:34:53.014 [Pipeline] // catchError 00:34:53.023 [Pipeline] archiveArtifacts 00:34:53.029 Archiving artifacts 00:34:53.130 [Pipeline] cleanWs 00:34:53.141 [WS-CLEANUP] Deleting project workspace... 00:34:53.141 [WS-CLEANUP] Deferred wipeout is used... 00:34:53.147 [WS-CLEANUP] done 00:34:53.149 [Pipeline] } 00:34:53.163 [Pipeline] // stage 00:34:53.169 [Pipeline] } 00:34:53.182 [Pipeline] // node 00:34:53.187 [Pipeline] End of Pipeline 00:34:53.227 Finished: SUCCESS
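For reference, this is the coverage post-processing that spdk/autotest.sh@398-407 ran near the end of the log, condensed from the xtrace into plain commands. $repo and $out are stand-ins for /home/vagrant/spdk_repo/spdk and its sibling output directory; the flags are copied from the trace, where the -t target name came from the VM hostname (fedora39-cloud-1721788873-2326 in this run).

# Condensed sketch of the lcov steps in the xtrace above.
opts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q"
# Capture the coverage recorded while the tests ran, tagged with the hostname
lcov $opts -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"
# Merge the pre-test baseline into a total report
lcov $opts -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# Strip third-party, system and helper-app code from the total
lcov $opts -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
lcov $opts -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
lcov $opts -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
lcov $opts -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
lcov $opts -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
# The intermediate captures are then removed (autotest.sh@408)
rm -f "$out/cov_base.info" "$out/cov_test.info"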