00:00:00.001 Started by upstream project "autotest-per-patch" build number 132758 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.121 The recommended git tool is: git 00:00:00.122 using credential 00000000-0000-0000-0000-000000000002 00:00:00.124 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.162 Fetching changes from the remote Git repository 00:00:00.166 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.210 Using shallow fetch with depth 1 00:00:00.210 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.210 > git --version # timeout=10 00:00:00.238 > git --version # 'git version 2.39.2' 00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.255 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.255 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.935 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.947 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.958 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.958 > git config core.sparsecheckout # timeout=10 00:00:08.970 > git read-tree -mu HEAD # timeout=10 00:00:08.985 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:09.006 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:09.007 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:09.105 [Pipeline] Start of Pipeline 00:00:09.116 [Pipeline] library 00:00:09.118 Loading library shm_lib@master 00:00:09.118 Library shm_lib@master is cached. Copying from home. 00:00:09.129 [Pipeline] node 00:00:09.140 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:09.142 [Pipeline] { 00:00:09.151 [Pipeline] catchError 00:00:09.153 [Pipeline] { 00:00:09.164 [Pipeline] wrap 00:00:09.172 [Pipeline] { 00:00:09.178 [Pipeline] stage 00:00:09.179 [Pipeline] { (Prologue) 00:00:09.196 [Pipeline] echo 00:00:09.198 Node: VM-host-SM38 00:00:09.205 [Pipeline] cleanWs 00:00:09.214 [WS-CLEANUP] Deleting project workspace... 00:00:09.214 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.221 [WS-CLEANUP] done 00:00:09.433 [Pipeline] setCustomBuildProperty 00:00:09.529 [Pipeline] httpRequest 00:00:10.205 [Pipeline] echo 00:00:10.206 Sorcerer 10.211.164.101 is alive 00:00:10.217 [Pipeline] retry 00:00:10.219 [Pipeline] { 00:00:10.236 [Pipeline] httpRequest 00:00:10.242 HttpMethod: GET 00:00:10.242 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.242 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.267 Response Code: HTTP/1.1 200 OK 00:00:10.267 Success: Status code 200 is in the accepted range: 200,404 00:00:10.268 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:30.235 [Pipeline] } 00:00:30.253 [Pipeline] // retry 00:00:30.260 [Pipeline] sh 00:00:30.547 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:30.566 [Pipeline] httpRequest 00:00:30.988 [Pipeline] echo 00:00:30.990 Sorcerer 10.211.164.101 is alive 00:00:31.000 [Pipeline] retry 00:00:31.003 [Pipeline] { 00:00:31.018 [Pipeline] httpRequest 00:00:31.023 HttpMethod: GET 00:00:31.024 URL: http://10.211.164.101/packages/spdk_0f59982b6c10cb8b2b5e58f8dc0111185aa011e0.tar.gz 00:00:31.025 Sending request to url: http://10.211.164.101/packages/spdk_0f59982b6c10cb8b2b5e58f8dc0111185aa011e0.tar.gz 00:00:31.028 Response Code: HTTP/1.1 200 OK 00:00:31.028 Success: Status code 200 is in the accepted range: 200,404 00:00:31.029 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_0f59982b6c10cb8b2b5e58f8dc0111185aa011e0.tar.gz 00:02:04.396 [Pipeline] } 00:02:04.415 [Pipeline] // retry 00:02:04.425 [Pipeline] sh 00:02:04.715 + tar --no-same-owner -xf spdk_0f59982b6c10cb8b2b5e58f8dc0111185aa011e0.tar.gz 00:02:08.047 [Pipeline] sh 00:02:08.332 + git -C spdk log --oneline -n5 00:02:08.332 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:02:08.332 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:02:08.332 0ea9ac02f accel/mlx5: Create pool of UMRs 00:02:08.332 60adca7e1 lib/mlx5: API to configure UMR 00:02:08.332 c2471e450 nvmf: Clean unassociated_qpairs on connect error 00:02:08.353 [Pipeline] writeFile 00:02:08.367 [Pipeline] sh 00:02:08.651 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:08.662 [Pipeline] sh 00:02:08.947 + cat autorun-spdk.conf 00:02:08.947 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.947 SPDK_TEST_NVME=1 00:02:08.947 SPDK_TEST_FTL=1 00:02:08.947 SPDK_TEST_ISAL=1 00:02:08.947 SPDK_RUN_ASAN=1 00:02:08.947 SPDK_RUN_UBSAN=1 00:02:08.947 SPDK_TEST_XNVME=1 00:02:08.947 SPDK_TEST_NVME_FDP=1 00:02:08.947 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.955 RUN_NIGHTLY=0 00:02:08.957 [Pipeline] } 00:02:08.973 [Pipeline] // stage 00:02:08.988 [Pipeline] stage 00:02:08.991 [Pipeline] { (Run VM) 00:02:09.004 [Pipeline] sh 00:02:09.285 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:09.285 + echo 'Start stage prepare_nvme.sh' 00:02:09.285 Start stage prepare_nvme.sh 00:02:09.285 + [[ -n 2 ]] 00:02:09.285 + disk_prefix=ex2 00:02:09.285 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:02:09.285 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:02:09.285 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:02:09.285 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.285 ++ SPDK_TEST_NVME=1 00:02:09.285 ++ SPDK_TEST_FTL=1 00:02:09.285 ++ SPDK_TEST_ISAL=1 00:02:09.285 ++ SPDK_RUN_ASAN=1 
00:02:09.285 ++ SPDK_RUN_UBSAN=1 00:02:09.285 ++ SPDK_TEST_XNVME=1 00:02:09.285 ++ SPDK_TEST_NVME_FDP=1 00:02:09.285 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.285 ++ RUN_NIGHTLY=0 00:02:09.285 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:02:09.285 + nvme_files=() 00:02:09.285 + declare -A nvme_files 00:02:09.285 + backend_dir=/var/lib/libvirt/images/backends 00:02:09.285 + nvme_files['nvme.img']=5G 00:02:09.286 + nvme_files['nvme-cmb.img']=5G 00:02:09.286 + nvme_files['nvme-multi0.img']=4G 00:02:09.286 + nvme_files['nvme-multi1.img']=4G 00:02:09.286 + nvme_files['nvme-multi2.img']=4G 00:02:09.286 + nvme_files['nvme-openstack.img']=8G 00:02:09.286 + nvme_files['nvme-zns.img']=5G 00:02:09.286 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:09.286 + (( SPDK_TEST_FTL == 1 )) 00:02:09.286 + nvme_files["nvme-ftl.img"]=6G 00:02:09.286 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:09.286 + nvme_files["nvme-fdp.img"]=1G 00:02:09.286 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:09.286 + for nvme in "${!nvme_files[@]}" 00:02:09.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:02:09.286 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:09.286 + for nvme in "${!nvme_files[@]}" 00:02:09.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:02:10.229 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:10.229 + for nvme in "${!nvme_files[@]}" 00:02:10.229 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:02:10.229 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:10.229 + for nvme in "${!nvme_files[@]}" 00:02:10.229 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:02:10.229 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:10.229 + for nvme in "${!nvme_files[@]}" 00:02:10.229 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:02:10.229 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:10.229 + for nvme in "${!nvme_files[@]}" 00:02:10.229 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:02:10.229 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:10.229 + for nvme in "${!nvme_files[@]}" 00:02:10.229 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:02:10.842 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:10.842 + for nvme in "${!nvme_files[@]}" 00:02:10.842 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:02:11.104 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:11.104 + for nvme in "${!nvme_files[@]}" 00:02:11.104 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:02:11.366 Formatting 
'/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:11.628 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:02:11.628 + echo 'End stage prepare_nvme.sh' 00:02:11.628 End stage prepare_nvme.sh 00:02:11.642 [Pipeline] sh 00:02:11.928 + DISTRO=fedora39 00:02:11.928 + CPUS=10 00:02:11.928 + RAM=12288 00:02:11.928 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:11.928 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:11.928 00:02:11.928 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:02:11.928 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:02:11.928 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:02:11.928 HELP=0 00:02:11.928 DRY_RUN=0 00:02:11.928 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:02:11.928 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:11.928 NVME_AUTO_CREATE=0 00:02:11.928 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:02:11.928 NVME_CMB=,,,, 00:02:11.928 NVME_PMR=,,,, 00:02:11.928 NVME_ZNS=,,,, 00:02:11.928 NVME_MS=true,,,, 00:02:11.928 NVME_FDP=,,,on, 00:02:11.928 SPDK_VAGRANT_DISTRO=fedora39 00:02:11.928 SPDK_VAGRANT_VMCPU=10 00:02:11.928 SPDK_VAGRANT_VMRAM=12288 00:02:11.928 SPDK_VAGRANT_PROVIDER=libvirt 00:02:11.928 SPDK_VAGRANT_HTTP_PROXY= 00:02:11.928 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:11.928 SPDK_OPENSTACK_NETWORK=0 00:02:11.928 VAGRANT_PACKAGE_BOX=0 00:02:11.928 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:02:11.928 FORCE_DISTRO=true 00:02:11.928 VAGRANT_BOX_VERSION= 00:02:11.928 EXTRA_VAGRANTFILES= 00:02:11.928 NIC_MODEL=e1000 00:02:11.928 00:02:11.928 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:02:11.928 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:02:14.477 Bringing machine 'default' up with 'libvirt' provider... 00:02:15.050 ==> default: Creating image (snapshot of base box volume). 00:02:15.050 ==> default: Creating domain with the following settings... 
00:02:15.050 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733521967_b1235d57ddb20075395d 00:02:15.050 ==> default: -- Domain type: kvm 00:02:15.050 ==> default: -- Cpus: 10 00:02:15.050 ==> default: -- Feature: acpi 00:02:15.050 ==> default: -- Feature: apic 00:02:15.050 ==> default: -- Feature: pae 00:02:15.050 ==> default: -- Memory: 12288M 00:02:15.050 ==> default: -- Memory Backing: hugepages: 00:02:15.050 ==> default: -- Management MAC: 00:02:15.050 ==> default: -- Loader: 00:02:15.050 ==> default: -- Nvram: 00:02:15.050 ==> default: -- Base box: spdk/fedora39 00:02:15.050 ==> default: -- Storage pool: default 00:02:15.050 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733521967_b1235d57ddb20075395d.img (20G) 00:02:15.050 ==> default: -- Volume Cache: default 00:02:15.051 ==> default: -- Kernel: 00:02:15.051 ==> default: -- Initrd: 00:02:15.051 ==> default: -- Graphics Type: vnc 00:02:15.051 ==> default: -- Graphics Port: -1 00:02:15.051 ==> default: -- Graphics IP: 127.0.0.1 00:02:15.051 ==> default: -- Graphics Password: Not defined 00:02:15.051 ==> default: -- Video Type: cirrus 00:02:15.051 ==> default: -- Video VRAM: 9216 00:02:15.051 ==> default: -- Sound Type: 00:02:15.051 ==> default: -- Keymap: en-us 00:02:15.051 ==> default: -- TPM Path: 00:02:15.051 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:15.051 ==> default: -- Command line args: 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:15.051 ==> default: -> value=-drive, 00:02:15.051 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:15.051 ==> default: -> value=-drive, 00:02:15.051 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:15.051 ==> default: -> value=-drive, 00:02:15.051 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.051 ==> default: -> value=-drive, 00:02:15.051 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.051 ==> default: -> value=-drive, 00:02:15.051 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:15.051 ==> default: -> value=-drive, 00:02:15.051 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:15.051 ==> default: -> value=-device, 00:02:15.051 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.312 ==> default: Creating shared folders metadata... 00:02:15.312 ==> default: Starting domain. 00:02:17.230 ==> default: Waiting for domain to get an IP address... 00:02:39.190 ==> default: Waiting for SSH to become available... 00:02:39.190 ==> default: Configuring and enabling network interfaces... 00:02:40.577 default: SSH address: 192.168.121.241:22 00:02:40.577 default: SSH username: vagrant 00:02:40.577 default: SSH auth method: private key 00:02:42.488 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:49.119 ==> default: Mounting SSHFS shared folder... 00:02:49.684 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:49.684 ==> default: Checking Mount.. 00:02:50.618 ==> default: Folder Successfully Mounted! 00:02:50.876 00:02:50.876 SUCCESS! 00:02:50.876 00:02:50.876 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:02:50.876 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:50.876 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:02:50.876 00:02:50.885 [Pipeline] } 00:02:50.901 [Pipeline] // stage 00:02:50.911 [Pipeline] dir 00:02:50.912 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:02:50.914 [Pipeline] { 00:02:50.929 [Pipeline] catchError 00:02:50.931 [Pipeline] { 00:02:50.943 [Pipeline] sh 00:02:51.223 + vagrant ssh-config --host vagrant 00:02:51.224 + sed -ne '/^Host/,$p' 00:02:51.224 + tee ssh_conf 00:02:53.802 Host vagrant 00:02:53.802 HostName 192.168.121.241 00:02:53.802 User vagrant 00:02:53.802 Port 22 00:02:53.802 UserKnownHostsFile /dev/null 00:02:53.802 StrictHostKeyChecking no 00:02:53.802 PasswordAuthentication no 00:02:53.802 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:53.802 IdentitiesOnly yes 00:02:53.802 LogLevel FATAL 00:02:53.802 ForwardAgent yes 00:02:53.802 ForwardX11 yes 00:02:53.802 00:02:53.815 [Pipeline] withEnv 00:02:53.817 [Pipeline] { 00:02:53.831 [Pipeline] sh 00:02:54.108 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:54.108 source /etc/os-release 00:02:54.108 [[ -e /image.version ]] && img=$(< /image.version) 00:02:54.108 # Minimal, systemd-like check. 
00:02:54.108 if [[ -e /.dockerenv ]]; then 00:02:54.108 # Clear garbage from the node'\''s name: 00:02:54.108 # agt-er_autotest_547-896 -> autotest_547-896 00:02:54.108 # $HOSTNAME is the actual container id 00:02:54.108 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:54.108 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:54.108 # We can assume this is a mount from a host where container is running, 00:02:54.108 # so fetch its hostname to easily identify the target swarm worker. 00:02:54.108 container="$(< /etc/hostname) ($agent)" 00:02:54.108 else 00:02:54.108 # Fallback 00:02:54.108 container=$agent 00:02:54.108 fi 00:02:54.108 fi 00:02:54.108 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:54.108 ' 00:02:54.117 [Pipeline] } 00:02:54.134 [Pipeline] // withEnv 00:02:54.142 [Pipeline] setCustomBuildProperty 00:02:54.157 [Pipeline] stage 00:02:54.159 [Pipeline] { (Tests) 00:02:54.177 [Pipeline] sh 00:02:54.453 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:54.466 [Pipeline] sh 00:02:54.742 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:54.756 [Pipeline] timeout 00:02:54.756 Timeout set to expire in 50 min 00:02:54.758 [Pipeline] { 00:02:54.773 [Pipeline] sh 00:02:55.049 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:55.614 HEAD is now at 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:02:55.625 [Pipeline] sh 00:02:55.902 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:56.172 [Pipeline] sh 00:02:56.449 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:56.720 [Pipeline] sh 00:02:57.007 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:57.007 ++ readlink -f spdk_repo 00:02:57.007 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:57.007 + [[ -n /home/vagrant/spdk_repo ]] 00:02:57.007 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:57.007 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:57.007 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:57.007 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:57.007 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:57.007 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:57.007 + cd /home/vagrant/spdk_repo 00:02:57.007 + source /etc/os-release 00:02:57.007 ++ NAME='Fedora Linux' 00:02:57.007 ++ VERSION='39 (Cloud Edition)' 00:02:57.007 ++ ID=fedora 00:02:57.007 ++ VERSION_ID=39 00:02:57.007 ++ VERSION_CODENAME= 00:02:57.007 ++ PLATFORM_ID=platform:f39 00:02:57.007 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:57.007 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:57.007 ++ LOGO=fedora-logo-icon 00:02:57.007 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:57.007 ++ HOME_URL=https://fedoraproject.org/ 00:02:57.007 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:57.007 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:57.007 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:57.007 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:57.007 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:57.007 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:57.007 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:57.007 ++ SUPPORT_END=2024-11-12 00:02:57.007 ++ VARIANT='Cloud Edition' 00:02:57.007 ++ VARIANT_ID=cloud 00:02:57.007 + uname -a 00:02:57.007 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:57.007 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:57.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:57.656 Hugepages 00:02:57.656 node hugesize free / total 00:02:57.656 node0 1048576kB 0 / 0 00:02:57.656 node0 2048kB 0 / 0 00:02:57.656 00:02:57.656 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.914 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:57.914 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:57.914 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:57.914 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:57.914 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:57.914 + rm -f /tmp/spdk-ld-path 00:02:57.914 + source autorun-spdk.conf 00:02:57.914 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:57.914 ++ SPDK_TEST_NVME=1 00:02:57.914 ++ SPDK_TEST_FTL=1 00:02:57.914 ++ SPDK_TEST_ISAL=1 00:02:57.914 ++ SPDK_RUN_ASAN=1 00:02:57.914 ++ SPDK_RUN_UBSAN=1 00:02:57.914 ++ SPDK_TEST_XNVME=1 00:02:57.914 ++ SPDK_TEST_NVME_FDP=1 00:02:57.914 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:57.914 ++ RUN_NIGHTLY=0 00:02:57.914 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:57.914 + [[ -n '' ]] 00:02:57.914 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:57.914 + for M in /var/spdk/build-*-manifest.txt 00:02:57.914 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:57.914 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:57.914 + for M in /var/spdk/build-*-manifest.txt 00:02:57.914 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:57.914 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:57.914 + for M in /var/spdk/build-*-manifest.txt 00:02:57.914 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:57.914 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:57.914 ++ uname 00:02:57.914 + [[ Linux == \L\i\n\u\x ]] 00:02:57.914 + sudo dmesg -T 00:02:57.914 + sudo dmesg --clear 00:02:57.914 + dmesg_pid=5029 00:02:57.914 
+ sudo dmesg -Tw 00:02:57.915 + [[ Fedora Linux == FreeBSD ]] 00:02:57.915 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:57.915 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:57.915 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:57.915 + [[ -x /usr/src/fio-static/fio ]] 00:02:57.915 + export FIO_BIN=/usr/src/fio-static/fio 00:02:57.915 + FIO_BIN=/usr/src/fio-static/fio 00:02:57.915 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:57.915 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:57.915 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:57.915 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:57.915 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:57.915 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:57.915 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:57.915 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:57.915 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:57.915 21:53:30 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:57.915 21:53:30 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:57.915 21:53:30 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:57.915 21:53:30 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:57.915 21:53:30 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:58.174 21:53:30 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:58.174 21:53:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:58.174 21:53:30 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:58.174 21:53:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:58.174 21:53:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:58.174 21:53:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:58.174 21:53:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.174 21:53:30 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.174 21:53:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.174 21:53:30 -- paths/export.sh@5 -- $ export PATH 00:02:58.174 21:53:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.174 21:53:30 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:58.174 21:53:30 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:58.174 21:53:30 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733522010.XXXXXX 00:02:58.174 21:53:30 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733522010.zSLvf4 00:02:58.174 21:53:30 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:58.174 21:53:30 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:58.174 21:53:30 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:58.174 21:53:30 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:58.174 21:53:30 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:58.174 21:53:30 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:58.174 21:53:30 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:58.174 21:53:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.174 21:53:30 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:58.174 21:53:30 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:58.174 21:53:30 -- pm/common@17 -- $ local monitor 00:02:58.174 21:53:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.174 21:53:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.174 21:53:30 -- pm/common@25 -- $ sleep 1 00:02:58.174 21:53:30 -- pm/common@21 -- $ date +%s 00:02:58.174 21:53:30 -- pm/common@21 -- $ date +%s 00:02:58.174 21:53:30 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733522010 00:02:58.174 21:53:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733522010 00:02:58.174 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733522010_collect-cpu-load.pm.log 00:02:58.174 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733522010_collect-vmstat.pm.log 00:02:59.108 21:53:31 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:59.108 21:53:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:59.108 21:53:31 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:59.108 21:53:31 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:59.108 21:53:31 -- spdk/autobuild.sh@16 -- $ date -u 00:02:59.108 Fri Dec 6 09:53:31 PM UTC 2024 00:02:59.108 21:53:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:59.108 v25.01-pre-310-g0f59982b6 00:02:59.108 21:53:31 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:59.108 21:53:31 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:59.108 21:53:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:59.108 21:53:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:59.108 21:53:31 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.108 ************************************ 00:02:59.108 START TEST asan 00:02:59.108 ************************************ 00:02:59.108 using asan 00:02:59.108 21:53:31 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:59.108 00:02:59.108 real 0m0.000s 00:02:59.108 user 0m0.000s 00:02:59.108 sys 0m0.000s 00:02:59.108 21:53:31 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:59.108 ************************************ 00:02:59.108 END TEST asan 00:02:59.108 ************************************ 00:02:59.108 21:53:31 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.108 21:53:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:59.108 21:53:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:59.108 21:53:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:59.108 21:53:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:59.108 21:53:31 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.108 ************************************ 00:02:59.108 START TEST ubsan 00:02:59.108 ************************************ 00:02:59.108 using ubsan 00:02:59.108 21:53:31 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:59.108 00:02:59.108 real 0m0.000s 00:02:59.108 user 0m0.000s 00:02:59.108 sys 0m0.000s 00:02:59.108 21:53:31 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:59.108 21:53:31 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.108 ************************************ 00:02:59.108 END TEST ubsan 00:02:59.108 ************************************ 00:02:59.108 21:53:31 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:59.108 21:53:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:59.108 21:53:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:59.108 21:53:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:59.108 21:53:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:59.108 21:53:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:59.108 21:53:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:59.108 21:53:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:59.108 21:53:31 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:59.366 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:59.366 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:59.624 Using 'verbs' RDMA provider 00:03:10.523 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:22.754 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:22.754 Creating mk/config.mk...done. 00:03:22.754 Creating mk/cc.flags.mk...done. 00:03:22.754 Type 'make' to build. 00:03:22.754 21:53:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:22.754 21:53:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:22.754 21:53:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:22.754 21:53:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:22.754 ************************************ 00:03:22.754 START TEST make 00:03:22.754 ************************************ 00:03:22.754 21:53:53 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:22.754 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:22.754 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:22.754 meson setup builddir \ 00:03:22.754 -Dwith-libaio=enabled \ 00:03:22.754 -Dwith-liburing=enabled \ 00:03:22.754 -Dwith-libvfn=disabled \ 00:03:22.754 -Dwith-spdk=disabled \ 00:03:22.754 -Dexamples=false \ 00:03:22.754 -Dtests=false \ 00:03:22.754 -Dtools=false && \ 00:03:22.754 meson compile -C builddir && \ 00:03:22.754 cd -) 00:03:22.754 make[1]: Nothing to be done for 'all'. 
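For reference, the build step recorded above can be re-run by hand from the same checkout. A minimal sketch, assuming an SPDK tree at /home/vagrant/spdk_repo/spdk with submodules fetched; the configure flags are copied verbatim from the autobuild.sh@67 invocation above, and the CI node's make used -j10:

#!/usr/bin/env bash
# Manual re-run of the configure/make step this log records (a sketch, not a CI script).
set -euo pipefail
cd /home/vagrant/spdk_repo/spdk

# Same flag set the autobuild step passed to configure above:
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
  --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
  --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme \
  --with-shared

# The CI run used make -j10; scale to the local core count instead:
make -j"$(nproc)"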
00:03:23.721 The Meson build system 00:03:23.721 Version: 1.5.0 00:03:23.721 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:23.721 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:23.721 Build type: native build 00:03:23.721 Project name: xnvme 00:03:23.721 Project version: 0.7.5 00:03:23.721 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:23.721 C linker for the host machine: cc ld.bfd 2.40-14 00:03:23.721 Host machine cpu family: x86_64 00:03:23.721 Host machine cpu: x86_64 00:03:23.721 Message: host_machine.system: linux 00:03:23.721 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:23.721 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:23.721 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:23.721 Run-time dependency threads found: YES 00:03:23.721 Has header "setupapi.h" : NO 00:03:23.721 Has header "linux/blkzoned.h" : YES 00:03:23.721 Has header "linux/blkzoned.h" : YES (cached) 00:03:23.721 Has header "libaio.h" : YES 00:03:23.721 Library aio found: YES 00:03:23.721 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:23.721 Run-time dependency liburing found: YES 2.2 00:03:23.721 Dependency libvfn skipped: feature with-libvfn disabled 00:03:23.721 Found CMake: /usr/bin/cmake (3.27.7) 00:03:23.721 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:23.721 Subproject spdk : skipped: feature with-spdk disabled 00:03:23.721 Run-time dependency appleframeworks found: NO (tried framework) 00:03:23.721 Run-time dependency appleframeworks found: NO (tried framework) 00:03:23.721 Library rt found: YES 00:03:23.721 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:23.721 Configuring xnvme_config.h using configuration 00:03:23.721 Configuring xnvme.spec using configuration 00:03:23.721 Run-time dependency bash-completion found: YES 2.11 00:03:23.721 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:23.721 Program cp found: YES (/usr/bin/cp) 00:03:23.721 Build targets in project: 3 00:03:23.721 00:03:23.721 xnvme 0.7.5 00:03:23.721 00:03:23.721 Subprojects 00:03:23.721 spdk : NO Feature 'with-spdk' disabled 00:03:23.721 00:03:23.721 User defined options 00:03:23.721 examples : false 00:03:23.721 tests : false 00:03:23.721 tools : false 00:03:23.721 with-libaio : enabled 00:03:23.721 with-liburing: enabled 00:03:23.721 with-libvfn : disabled 00:03:23.721 with-spdk : disabled 00:03:23.721 00:03:23.721 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:23.981 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:23.981 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:23.981 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:23.981 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:23.981 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:23.981 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:23.981 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:23.981 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:23.981 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:23.981 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:24.241 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:24.241 [11/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:24.241 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:24.241 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:24.241 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:24.241 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:24.241 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:24.241 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:24.241 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:24.241 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:24.241 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:24.241 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:24.241 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:24.241 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:24.241 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:24.241 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:24.241 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:24.241 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:24.241 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:24.241 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:24.241 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:24.241 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:24.241 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:24.241 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:24.241 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:24.241 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:24.241 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:24.241 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:24.241 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:24.500 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:24.500 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:24.500 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:24.500 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:24.500 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:24.500 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:24.500 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:24.500 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:24.500 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:24.500 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:24.500 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:24.500 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:24.500 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:24.500 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:24.500 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:24.500 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:24.500 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:24.500 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:24.500 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:24.500 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:24.500 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:24.500 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:24.500 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:24.500 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:24.500 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:24.500 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:24.500 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:24.500 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:24.761 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:24.761 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:24.761 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:24.761 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:24.761 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:24.761 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:24.761 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:25.021 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:25.021 [75/76] Linking static target lib/libxnvme.a 00:03:25.284 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:25.285 INFO: autodetecting backend as ninja 00:03:25.285 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:25.285 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:31.871 The Meson build system 00:03:31.871 Version: 1.5.0 00:03:31.871 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:31.871 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:31.871 Build type: native build 00:03:31.871 Program cat found: YES (/usr/bin/cat) 00:03:31.871 Project name: DPDK 00:03:31.871 Project version: 24.03.0 00:03:31.871 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:31.871 C linker for the host machine: cc ld.bfd 2.40-14 00:03:31.871 Host machine cpu family: x86_64 00:03:31.871 Host machine cpu: x86_64 00:03:31.871 Message: ## Building in Developer Mode ## 00:03:31.871 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:31.871 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:31.871 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:31.871 Program python3 found: YES (/usr/bin/python3) 00:03:31.871 Program cat found: YES (/usr/bin/cat) 00:03:31.871 Compiler for C supports arguments -march=native: YES 00:03:31.871 Checking for size of "void *" : 8 00:03:31.871 Checking for size of "void *" : 8 (cached) 00:03:31.871 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:03:31.871 Library m found: YES 00:03:31.871 Library numa found: YES 00:03:31.871 Has header "numaif.h" : YES 00:03:31.871 Library fdt found: NO 00:03:31.871 Library execinfo found: NO 00:03:31.871 Has header "execinfo.h" : YES 00:03:31.871 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:31.871 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:31.871 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:31.871 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:31.871 Run-time dependency openssl found: YES 3.1.1 00:03:31.871 Run-time dependency libpcap found: YES 1.10.4 00:03:31.871 Has header "pcap.h" with dependency libpcap: YES 00:03:31.871 Compiler for C supports arguments -Wcast-qual: YES 00:03:31.871 Compiler for C supports arguments -Wdeprecated: YES 00:03:31.871 Compiler for C supports arguments -Wformat: YES 00:03:31.871 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:31.871 Compiler for C supports arguments -Wformat-security: NO 00:03:31.871 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:31.871 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:31.871 Compiler for C supports arguments -Wnested-externs: YES 00:03:31.871 Compiler for C supports arguments -Wold-style-definition: YES 00:03:31.871 Compiler for C supports arguments -Wpointer-arith: YES 00:03:31.871 Compiler for C supports arguments -Wsign-compare: YES 00:03:31.871 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:31.871 Compiler for C supports arguments -Wundef: YES 00:03:31.871 Compiler for C supports arguments -Wwrite-strings: YES 00:03:31.871 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:31.871 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:31.871 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:31.871 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:31.871 Program objdump found: YES (/usr/bin/objdump) 00:03:31.871 Compiler for C supports arguments -mavx512f: YES 00:03:31.872 Checking if "AVX512 checking" compiles: YES 00:03:31.872 Fetching value of define "__SSE4_2__" : 1 00:03:31.872 Fetching value of define "__AES__" : 1 00:03:31.872 Fetching value of define "__AVX__" : 1 00:03:31.872 Fetching value of define "__AVX2__" : 1 00:03:31.872 Fetching value of define "__AVX512BW__" : 1 00:03:31.872 Fetching value of define "__AVX512CD__" : 1 00:03:31.872 Fetching value of define "__AVX512DQ__" : 1 00:03:31.872 Fetching value of define "__AVX512F__" : 1 00:03:31.872 Fetching value of define "__AVX512VL__" : 1 00:03:31.872 Fetching value of define "__PCLMUL__" : 1 00:03:31.872 Fetching value of define "__RDRND__" : 1 00:03:31.872 Fetching value of define "__RDSEED__" : 1 00:03:31.872 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:31.872 Fetching value of define "__znver1__" : (undefined) 00:03:31.872 Fetching value of define "__znver2__" : (undefined) 00:03:31.872 Fetching value of define "__znver3__" : (undefined) 00:03:31.872 Fetching value of define "__znver4__" : (undefined) 00:03:31.872 Library asan found: YES 00:03:31.872 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:31.872 Message: lib/log: Defining dependency "log" 00:03:31.872 Message: lib/kvargs: Defining dependency "kvargs" 00:03:31.872 Message: lib/telemetry: Defining dependency "telemetry" 00:03:31.872 Library rt found: YES 00:03:31.872 Checking for function "getentropy" : NO 00:03:31.872 Message: 
lib/eal: Defining dependency "eal" 00:03:31.872 Message: lib/ring: Defining dependency "ring" 00:03:31.872 Message: lib/rcu: Defining dependency "rcu" 00:03:31.872 Message: lib/mempool: Defining dependency "mempool" 00:03:31.872 Message: lib/mbuf: Defining dependency "mbuf" 00:03:31.872 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:31.872 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:31.872 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:31.872 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:31.872 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:31.872 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:31.872 Compiler for C supports arguments -mpclmul: YES 00:03:31.872 Compiler for C supports arguments -maes: YES 00:03:31.872 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:31.872 Compiler for C supports arguments -mavx512bw: YES 00:03:31.872 Compiler for C supports arguments -mavx512dq: YES 00:03:31.872 Compiler for C supports arguments -mavx512vl: YES 00:03:31.872 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:31.872 Compiler for C supports arguments -mavx2: YES 00:03:31.872 Compiler for C supports arguments -mavx: YES 00:03:31.872 Message: lib/net: Defining dependency "net" 00:03:31.872 Message: lib/meter: Defining dependency "meter" 00:03:31.872 Message: lib/ethdev: Defining dependency "ethdev" 00:03:31.872 Message: lib/pci: Defining dependency "pci" 00:03:31.872 Message: lib/cmdline: Defining dependency "cmdline" 00:03:31.872 Message: lib/hash: Defining dependency "hash" 00:03:31.872 Message: lib/timer: Defining dependency "timer" 00:03:31.872 Message: lib/compressdev: Defining dependency "compressdev" 00:03:31.872 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:31.872 Message: lib/dmadev: Defining dependency "dmadev" 00:03:31.872 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:31.872 Message: lib/power: Defining dependency "power" 00:03:31.872 Message: lib/reorder: Defining dependency "reorder" 00:03:31.872 Message: lib/security: Defining dependency "security" 00:03:31.872 Has header "linux/userfaultfd.h" : YES 00:03:31.872 Has header "linux/vduse.h" : YES 00:03:31.872 Message: lib/vhost: Defining dependency "vhost" 00:03:31.872 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:31.872 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:31.872 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:31.872 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:31.872 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:31.872 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:31.872 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:31.872 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:31.872 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:31.872 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:31.872 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:31.872 Configuring doxy-api-html.conf using configuration 00:03:31.872 Configuring doxy-api-man.conf using configuration 00:03:31.872 Program mandb found: YES (/usr/bin/mandb) 00:03:31.872 Program sphinx-build found: NO 00:03:31.872 Configuring rte_build_config.h using configuration 00:03:31.872 Message: 00:03:31.872 ================= 00:03:31.872 Applications Enabled 00:03:31.872 
=================
00:03:31.872
00:03:31.872 apps:
00:03:31.872
00:03:31.872
00:03:31.872 Message:
00:03:31.872 =================
00:03:31.872 Libraries Enabled
00:03:31.872 =================
00:03:31.872
00:03:31.872 libs:
00:03:31.872 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:31.872 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:31.872 cryptodev, dmadev, power, reorder, security, vhost,
00:03:31.872
00:03:31.872 Message:
00:03:31.872 ===============
00:03:31.872 Drivers Enabled
00:03:31.872 ===============
00:03:31.872
00:03:31.872 common:
00:03:31.872
00:03:31.872 bus:
00:03:31.872 pci, vdev,
00:03:31.872 mempool:
00:03:31.872 ring,
00:03:31.872 dma:
00:03:31.872
00:03:31.872 net:
00:03:31.872
00:03:31.872 crypto:
00:03:31.872
00:03:31.872 compress:
00:03:31.872
00:03:31.872 vdpa:
00:03:31.872
00:03:31.872
00:03:31.872 Message:
00:03:31.872 =================
00:03:31.872 Content Skipped
00:03:31.872 =================
00:03:31.872
00:03:31.872 apps:
00:03:31.872 dumpcap: explicitly disabled via build config
00:03:31.872 graph: explicitly disabled via build config
00:03:31.872 pdump: explicitly disabled via build config
00:03:31.872 proc-info: explicitly disabled via build config
00:03:31.872 test-acl: explicitly disabled via build config
00:03:31.872 test-bbdev: explicitly disabled via build config
00:03:31.872 test-cmdline: explicitly disabled via build config
00:03:31.872 test-compress-perf: explicitly disabled via build config
00:03:31.872 test-crypto-perf: explicitly disabled via build config
00:03:31.872 test-dma-perf: explicitly disabled via build config
00:03:31.872 test-eventdev: explicitly disabled via build config
00:03:31.872 test-fib: explicitly disabled via build config
00:03:31.872 test-flow-perf: explicitly disabled via build config
00:03:31.872 test-gpudev: explicitly disabled via build config
00:03:31.872 test-mldev: explicitly disabled via build config
00:03:31.872 test-pipeline: explicitly disabled via build config
00:03:31.872 test-pmd: explicitly disabled via build config
00:03:31.872 test-regex: explicitly disabled via build config
00:03:31.872 test-sad: explicitly disabled via build config
00:03:31.872 test-security-perf: explicitly disabled via build config
00:03:31.872
00:03:31.872 libs:
00:03:31.872 argparse: explicitly disabled via build config
00:03:31.872 metrics: explicitly disabled via build config
00:03:31.872 acl: explicitly disabled via build config
00:03:31.872 bbdev: explicitly disabled via build config
00:03:31.872 bitratestats: explicitly disabled via build config
00:03:31.872 bpf: explicitly disabled via build config
00:03:31.872 cfgfile: explicitly disabled via build config
00:03:31.872 distributor: explicitly disabled via build config
00:03:31.872 efd: explicitly disabled via build config
00:03:31.872 eventdev: explicitly disabled via build config
00:03:31.872 dispatcher: explicitly disabled via build config
00:03:31.872 gpudev: explicitly disabled via build config
00:03:31.872 gro: explicitly disabled via build config
00:03:31.872 gso: explicitly disabled via build config
00:03:31.872 ip_frag: explicitly disabled via build config
00:03:31.872 jobstats: explicitly disabled via build config
00:03:31.872 latencystats: explicitly disabled via build config
00:03:31.872 lpm: explicitly disabled via build config
00:03:31.872 member: explicitly disabled via build config
00:03:31.872 pcapng: explicitly disabled via build config
00:03:31.872 rawdev: explicitly disabled via build config
00:03:31.872 regexdev: explicitly disabled via build config
00:03:31.872 mldev: explicitly disabled via build config
00:03:31.872 rib: explicitly disabled via build config
00:03:31.872 sched: explicitly disabled via build config
00:03:31.872 stack: explicitly disabled via build config
00:03:31.872 ipsec: explicitly disabled via build config
00:03:31.872 pdcp: explicitly disabled via build config
00:03:31.872 fib: explicitly disabled via build config
00:03:31.872 port: explicitly disabled via build config
00:03:31.872 pdump: explicitly disabled via build config
00:03:31.872 table: explicitly disabled via build config
00:03:31.872 pipeline: explicitly disabled via build config
00:03:31.872 graph: explicitly disabled via build config
00:03:31.872 node: explicitly disabled via build config
00:03:31.872
00:03:31.872 drivers:
00:03:31.872 common/cpt: not in enabled drivers build config
00:03:31.872 common/dpaax: not in enabled drivers build config
00:03:31.872 common/iavf: not in enabled drivers build config
00:03:31.872 common/idpf: not in enabled drivers build config
00:03:31.872 common/ionic: not in enabled drivers build config
00:03:31.872 common/mvep: not in enabled drivers build config
00:03:31.872 common/octeontx: not in enabled drivers build config
00:03:31.872 bus/auxiliary: not in enabled drivers build config
00:03:31.872 bus/cdx: not in enabled drivers build config
00:03:31.872 bus/dpaa: not in enabled drivers build config
00:03:31.872 bus/fslmc: not in enabled drivers build config
00:03:31.872 bus/ifpga: not in enabled drivers build config
00:03:31.872 bus/platform: not in enabled drivers build config
00:03:31.872 bus/uacce: not in enabled drivers build config
00:03:31.872 bus/vmbus: not in enabled drivers build config
00:03:31.872 common/cnxk: not in enabled drivers build config
00:03:31.872 common/mlx5: not in enabled drivers build config
00:03:31.872 common/nfp: not in enabled drivers build config
00:03:31.872 common/nitrox: not in enabled drivers build config
00:03:31.872 common/qat: not in enabled drivers build config
00:03:31.872 common/sfc_efx: not in enabled drivers build config
00:03:31.872 mempool/bucket: not in enabled drivers build config
00:03:31.872 mempool/cnxk: not in enabled drivers build config
00:03:31.872 mempool/dpaa: not in enabled drivers build config
00:03:31.872 mempool/dpaa2: not in enabled drivers build config
00:03:31.872 mempool/octeontx: not in enabled drivers build config
00:03:31.872 mempool/stack: not in enabled drivers build config
00:03:31.872 dma/cnxk: not in enabled drivers build config
00:03:31.872 dma/dpaa: not in enabled drivers build config
00:03:31.872 dma/dpaa2: not in enabled drivers build config
00:03:31.872 dma/hisilicon: not in enabled drivers build config
00:03:31.872 dma/idxd: not in enabled drivers build config
00:03:31.872 dma/ioat: not in enabled drivers build config
00:03:31.872 dma/skeleton: not in enabled drivers build config
00:03:31.872 net/af_packet: not in enabled drivers build config
00:03:31.872 net/af_xdp: not in enabled drivers build config
00:03:31.872 net/ark: not in enabled drivers build config
00:03:31.872 net/atlantic: not in enabled drivers build config
00:03:31.872 net/avp: not in enabled drivers build config
00:03:31.872 net/axgbe: not in enabled drivers build config
00:03:31.872 net/bnx2x: not in enabled drivers build config
00:03:31.872 net/bnxt: not in enabled drivers build config
00:03:31.872 net/bonding: not in enabled drivers build config
00:03:31.872 net/cnxk: not in enabled drivers build config
00:03:31.872 net/cpfl: not in enabled drivers build config
00:03:31.872 net/cxgbe: not in enabled drivers build config
00:03:31.872 net/dpaa: not in enabled drivers build config
00:03:31.872 net/dpaa2: not in enabled drivers build config
00:03:31.872 net/e1000: not in enabled drivers build config
00:03:31.872 net/ena: not in enabled drivers build config
00:03:31.872 net/enetc: not in enabled drivers build config
00:03:31.872 net/enetfec: not in enabled drivers build config
00:03:31.872 net/enic: not in enabled drivers build config
00:03:31.872 net/failsafe: not in enabled drivers build config
00:03:31.872 net/fm10k: not in enabled drivers build config
00:03:31.872 net/gve: not in enabled drivers build config
00:03:31.872 net/hinic: not in enabled drivers build config
00:03:31.872 net/hns3: not in enabled drivers build config
00:03:31.872 net/i40e: not in enabled drivers build config
00:03:31.872 net/iavf: not in enabled drivers build config
00:03:31.872 net/ice: not in enabled drivers build config
00:03:31.872 net/idpf: not in enabled drivers build config
00:03:31.872 net/igc: not in enabled drivers build config
00:03:31.872 net/ionic: not in enabled drivers build config
00:03:31.872 net/ipn3ke: not in enabled drivers build config
00:03:31.872 net/ixgbe: not in enabled drivers build config
00:03:31.872 net/mana: not in enabled drivers build config
00:03:31.872 net/memif: not in enabled drivers build config
00:03:31.872 net/mlx4: not in enabled drivers build config
00:03:31.872 net/mlx5: not in enabled drivers build config
00:03:31.872 net/mvneta: not in enabled drivers build config
00:03:31.872 net/mvpp2: not in enabled drivers build config
00:03:31.872 net/netvsc: not in enabled drivers build config
00:03:31.872 net/nfb: not in enabled drivers build config
00:03:31.872 net/nfp: not in enabled drivers build config
00:03:31.872 net/ngbe: not in enabled drivers build config
00:03:31.872 net/null: not in enabled drivers build config
00:03:31.872 net/octeontx: not in enabled drivers build config
00:03:31.872 net/octeon_ep: not in enabled drivers build config
00:03:31.872 net/pcap: not in enabled drivers build config
00:03:31.872 net/pfe: not in enabled drivers build config
00:03:31.872 net/qede: not in enabled drivers build config
00:03:31.872 net/ring: not in enabled drivers build config
00:03:31.872 net/sfc: not in enabled drivers build config
00:03:31.872 net/softnic: not in enabled drivers build config
00:03:31.872 net/tap: not in enabled drivers build config
00:03:31.872 net/thunderx: not in enabled drivers build config
00:03:31.872 net/txgbe: not in enabled drivers build config
00:03:31.872 net/vdev_netvsc: not in enabled drivers build config
00:03:31.872 net/vhost: not in enabled drivers build config
00:03:31.872 net/virtio: not in enabled drivers build config
00:03:31.872 net/vmxnet3: not in enabled drivers build config
00:03:31.872 raw/*: missing internal dependency, "rawdev"
00:03:31.872 crypto/armv8: not in enabled drivers build config
00:03:31.872 crypto/bcmfs: not in enabled drivers build config
00:03:31.872 crypto/caam_jr: not in enabled drivers build config
00:03:31.872 crypto/ccp: not in enabled drivers build config
00:03:31.872 crypto/cnxk: not in enabled drivers build config
00:03:31.872 crypto/dpaa_sec: not in enabled drivers build config
00:03:31.872 crypto/dpaa2_sec: not in enabled drivers build config
00:03:31.872 crypto/ipsec_mb: not in enabled drivers build config
00:03:31.872 crypto/mlx5: not in enabled drivers build config
00:03:31.872 crypto/mvsam: not in enabled drivers build config
00:03:31.872 crypto/nitrox: not in enabled drivers build config
00:03:31.872 crypto/null: not in enabled drivers build config
00:03:31.872 crypto/octeontx: not in enabled drivers build config
00:03:31.872 crypto/openssl: not in enabled drivers build config
00:03:31.872 crypto/scheduler: not in enabled drivers build config
00:03:31.872 crypto/uadk: not in enabled drivers build config
00:03:31.872 crypto/virtio: not in enabled drivers build config
00:03:31.872 compress/isal: not in enabled drivers build config
00:03:31.872 compress/mlx5: not in enabled drivers build config
00:03:31.872 compress/nitrox: not in enabled drivers build config
00:03:31.872 compress/octeontx: not in enabled drivers build config
00:03:31.872 compress/zlib: not in enabled drivers build config
00:03:31.872 regex/*: missing internal dependency, "regexdev"
00:03:31.872 ml/*: missing internal dependency, "mldev"
00:03:31.872 vdpa/ifc: not in enabled drivers build config
00:03:31.872 vdpa/mlx5: not in enabled drivers build config
00:03:31.872 vdpa/nfp: not in enabled drivers build config
00:03:31.872 vdpa/sfc: not in enabled drivers build config
00:03:31.872 event/*: missing internal dependency, "eventdev"
00:03:31.872 baseband/*: missing internal dependency, "bbdev"
00:03:31.872 gpu/*: missing internal dependency, "gpudev"
00:03:31.872
00:03:31.872
00:03:32.132 Build targets in project: 84
00:03:32.132
00:03:32.132 DPDK 24.03.0
00:03:32.132
00:03:32.132 User defined options
00:03:32.132 buildtype : debug
00:03:32.132 default_library : shared
00:03:32.132 libdir : lib
00:03:32.132 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:32.132 b_sanitize : address
00:03:32.132 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:32.132 c_link_args :
00:03:32.132 cpu_instruction_set: native
00:03:32.132 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:32.132 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:32.132 enable_docs : false
00:03:32.132 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:32.132 enable_kmods : false
00:03:32.132 max_lcores : 128
00:03:32.132 tests : false
00:03:32.132
00:03:32.132 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:32.704 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:32.704 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:32.704 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:32.704 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:32.704 [4/267] Linking static target lib/librte_log.a
00:03:32.704 [5/267] Linking static target lib/librte_kvargs.a
00:03:32.704 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:32.964 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:32.964 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:32.964 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:32.964 [10/267]
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:32.964 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:33.224 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:33.224 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:33.224 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:33.224 [15/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.224 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:33.224 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:33.224 [18/267] Linking static target lib/librte_telemetry.a 00:03:33.483 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:33.483 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.483 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:33.483 [22/267] Linking target lib/librte_log.so.24.1 00:03:33.483 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:33.483 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:33.740 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:33.740 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:33.740 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:33.740 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:33.740 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:33.740 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:33.999 [31/267] Linking target lib/librte_kvargs.so.24.1 00:03:33.999 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:33.999 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:33.999 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:33.999 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:33.999 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:33.999 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:33.999 [38/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.999 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:33.999 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:34.262 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:34.262 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:34.262 [43/267] Linking target lib/librte_telemetry.so.24.1 00:03:34.262 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:34.262 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:34.262 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:34.262 [47/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:34.556 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:34.556 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:34.556 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:34.556 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:34.556 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:34.556 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:34.556 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:34.816 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:34.816 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:34.816 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:34.816 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:34.816 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:34.816 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:35.076 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:35.076 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:35.076 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:35.076 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:35.076 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:35.076 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:35.076 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:35.076 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:35.336 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:35.336 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:35.596 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:35.596 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:35.596 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:35.596 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:35.596 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:35.596 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:35.596 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:35.596 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:35.856 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:35.856 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:35.856 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:35.856 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:36.116 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:36.116 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:36.116 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:36.116 [86/267] Linking static target lib/librte_ring.a 00:03:36.116 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:36.116 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:36.116 [89/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:36.116 [90/267] Linking static target lib/librte_eal.a 00:03:36.116 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:36.378 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:36.378 [93/267] Linking static target lib/librte_mempool.a 00:03:36.378 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:36.378 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:36.378 [96/267] Linking static target lib/librte_rcu.a 00:03:36.378 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:36.378 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.378 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:36.378 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:36.640 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:36.641 [102/267] Linking static target lib/librte_mbuf.a 00:03:36.641 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:36.641 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:36.641 [105/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:36.641 [106/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.903 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:36.903 [108/267] Linking static target lib/librte_net.a 00:03:36.903 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:36.903 [110/267] Linking static target lib/librte_meter.a 00:03:36.903 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:36.903 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:36.903 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:37.163 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:37.163 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.164 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.164 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.424 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:37.424 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:37.424 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:37.424 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.686 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:37.686 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:37.686 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:37.686 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:37.686 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:37.686 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:37.686 [128/267] Linking static target lib/librte_pci.a 00:03:37.947 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:37.947 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 
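Most of the objects between steps [7/267] and [130/267] above belong to librte_eal, the environment-abstraction layer that every DPDK consumer brings up before touching rings, mempools, or ethdev. For orientation only, a minimal EAL consumer looks roughly like the sketch below; it is not taken from this build, and assumes nothing beyond the standard rte_eal.h and rte_lcore.h entry points of the DPDK 24.03.0 tree being compiled here.

/* Sketch only -- not part of the captured build output. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() consumes the EAL arguments (core mask, memory,
	 * PCI allow-list) and initializes hugepages, lcores and buses. */
	int ret = rte_eal_init(argc, argv);
	if (ret < 0) {
		fprintf(stderr, "rte_eal_init() failed\n");
		return 1;
	}

	printf("EAL up, %u lcores available\n", rte_lcore_count());

	/* Release hugepages and other EAL resources on the way out. */
	rte_eal_cleanup();
	return 0;
}

Under the options recorded in the configuration summary above (buildtype : debug, default_library : shared, b_sanitize : address), such a consumer would link against the ASAN-instrumented shared librte_*.so.24.1 targets produced in the later [n/267] steps.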
00:03:37.947 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:37.947 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:37.947 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:37.947 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:37.947 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:37.947 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:37.947 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:38.208 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:38.208 [139/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.208 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:38.208 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:38.208 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:38.209 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:38.209 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:38.209 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:38.209 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:38.209 [147/267] Linking static target lib/librte_cmdline.a 00:03:38.469 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:38.469 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:38.469 [150/267] Linking static target lib/librte_timer.a 00:03:38.469 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:38.469 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:38.469 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:38.731 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:38.731 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:38.731 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:38.993 [157/267] Linking static target lib/librte_compressdev.a 00:03:38.993 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:38.993 [159/267] Linking static target lib/librte_hash.a 00:03:38.993 [160/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.993 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:38.993 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:38.993 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:38.993 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:38.993 [165/267] Linking static target lib/librte_dmadev.a 00:03:39.255 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:39.255 [167/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:39.255 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:39.516 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:39.516 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:39.516 [171/267] Linking static target lib/librte_ethdev.a 00:03:39.516 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:39.516 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.516 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.777 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:39.777 [176/267] Linking static target lib/librte_cryptodev.a 00:03:39.777 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:39.777 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:39.777 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:39.777 [180/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.777 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.777 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:39.777 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:40.039 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:40.039 [185/267] Linking static target lib/librte_power.a 00:03:40.039 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:40.039 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:40.298 [188/267] Linking static target lib/librte_reorder.a 00:03:40.298 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:40.298 [190/267] Linking static target lib/librte_security.a 00:03:40.298 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:40.298 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:40.556 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:40.556 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.868 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.868 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.868 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:40.868 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:41.140 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:41.140 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:41.140 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:41.140 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:41.140 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:41.399 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:41.399 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:41.399 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:41.399 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:41.399 [208/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.658 [209/267] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:41.658 [210/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:41.658 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:41.658 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:41.658 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:41.658 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:41.658 [215/267] Linking static target drivers/librte_bus_vdev.a 00:03:41.658 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:41.658 [217/267] Linking static target drivers/librte_bus_pci.a 00:03:41.658 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:41.915 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:41.915 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:41.915 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.915 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:41.915 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:41.915 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:41.915 [225/267] Linking static target drivers/librte_mempool_ring.a 00:03:42.172 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.735 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:43.667 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.667 [229/267] Linking target lib/librte_eal.so.24.1 00:03:43.667 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:43.667 [231/267] Linking target lib/librte_timer.so.24.1 00:03:43.667 [232/267] Linking target lib/librte_meter.so.24.1 00:03:43.667 [233/267] Linking target lib/librte_ring.so.24.1 00:03:43.667 [234/267] Linking target lib/librte_pci.so.24.1 00:03:43.667 [235/267] Linking target lib/librte_dmadev.so.24.1 00:03:43.667 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:43.924 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:43.924 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:43.924 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:43.924 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:43.924 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:43.924 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:43.924 [243/267] Linking target lib/librte_mempool.so.24.1 00:03:43.924 [244/267] Linking target lib/librte_rcu.so.24.1 00:03:43.924 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:43.924 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:43.924 [247/267] Linking target lib/librte_mbuf.so.24.1 00:03:43.924 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:44.182 [249/267] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:44.182 [250/267] Linking target lib/librte_compressdev.so.24.1 00:03:44.182 [251/267] Linking target lib/librte_reorder.so.24.1 00:03:44.182 [252/267] Linking target lib/librte_net.so.24.1 00:03:44.182 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:44.182 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:44.182 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:44.439 [256/267] Linking target lib/librte_hash.so.24.1 00:03:44.439 [257/267] Linking target lib/librte_security.so.24.1 00:03:44.439 [258/267] Linking target lib/librte_cmdline.so.24.1 00:03:44.439 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:44.697 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.697 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:44.697 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:44.955 [263/267] Linking target lib/librte_power.so.24.1 00:03:45.213 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:45.213 [265/267] Linking static target lib/librte_vhost.a 00:03:46.144 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.403 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:46.403 INFO: autodetecting backend as ninja 00:03:46.403 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:01.339 CC lib/log/log_deprecated.o 00:04:01.339 CC lib/log/log_flags.o 00:04:01.339 CC lib/log/log.o 00:04:01.339 CC lib/ut_mock/mock.o 00:04:01.339 CC lib/ut/ut.o 00:04:01.339 LIB libspdk_ut.a 00:04:01.339 LIB libspdk_log.a 00:04:01.339 LIB libspdk_ut_mock.a 00:04:01.339 SO libspdk_ut.so.2.0 00:04:01.339 SO libspdk_ut_mock.so.6.0 00:04:01.339 SO libspdk_log.so.7.1 00:04:01.339 SYMLINK libspdk_ut.so 00:04:01.339 SYMLINK libspdk_ut_mock.so 00:04:01.339 SYMLINK libspdk_log.so 00:04:01.339 CC lib/util/base64.o 00:04:01.339 CC lib/util/bit_array.o 00:04:01.339 CC lib/util/crc16.o 00:04:01.339 CC lib/util/crc32.o 00:04:01.339 CC lib/util/cpuset.o 00:04:01.339 CXX lib/trace_parser/trace.o 00:04:01.339 CC lib/util/crc32c.o 00:04:01.339 CC lib/dma/dma.o 00:04:01.339 CC lib/ioat/ioat.o 00:04:01.339 CC lib/vfio_user/host/vfio_user_pci.o 00:04:01.339 CC lib/util/crc32_ieee.o 00:04:01.339 CC lib/util/crc64.o 00:04:01.339 CC lib/util/dif.o 00:04:01.339 LIB libspdk_dma.a 00:04:01.339 CC lib/util/fd.o 00:04:01.339 SO libspdk_dma.so.5.0 00:04:01.339 CC lib/vfio_user/host/vfio_user.o 00:04:01.339 CC lib/util/fd_group.o 00:04:01.339 CC lib/util/file.o 00:04:01.339 SYMLINK libspdk_dma.so 00:04:01.339 CC lib/util/hexlify.o 00:04:01.339 CC lib/util/iov.o 00:04:01.339 LIB libspdk_ioat.a 00:04:01.339 SO libspdk_ioat.so.7.0 00:04:01.339 CC lib/util/math.o 00:04:01.339 CC lib/util/net.o 00:04:01.339 CC lib/util/pipe.o 00:04:01.339 SYMLINK libspdk_ioat.so 00:04:01.339 CC lib/util/strerror_tls.o 00:04:01.339 LIB libspdk_vfio_user.a 00:04:01.339 SO libspdk_vfio_user.so.5.0 00:04:01.339 CC lib/util/string.o 00:04:01.339 SYMLINK libspdk_vfio_user.so 00:04:01.339 CC lib/util/uuid.o 00:04:01.339 CC lib/util/xor.o 00:04:01.339 CC lib/util/zipf.o 00:04:01.339 CC lib/util/md5.o 00:04:01.339 LIB libspdk_util.a 00:04:01.339 SO libspdk_util.so.10.1 00:04:01.339 LIB libspdk_trace_parser.a 
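At this point the log has moved on from DPDK to SPDK itself, and the first SPDK libraries out of the gate are liblog, libut_mock and libutil. As a point of reference only (this code does not appear anywhere in the log), the lib/log objects compiled above back the spdk/log.h macros, which a consumer would exercise roughly like this:

/* Sketch only -- assumes the stock spdk/log.h API and nothing else. */
#include "spdk/log.h"

int main(void)
{
	/* Print everything up to and including DEBUG-level messages. */
	spdk_log_set_print_level(SPDK_LOG_DEBUG);

	SPDK_NOTICELOG("liblog smoke test: %s\n", "hello");
	SPDK_ERRLOG("error-level message, rc=%d\n", -22);
	return 0;
}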
00:04:01.339 SO libspdk_trace_parser.so.6.0 00:04:01.339 SYMLINK libspdk_util.so 00:04:01.339 SYMLINK libspdk_trace_parser.so 00:04:01.339 CC lib/conf/conf.o 00:04:01.339 CC lib/rdma_utils/rdma_utils.o 00:04:01.339 CC lib/idxd/idxd.o 00:04:01.339 CC lib/idxd/idxd_kernel.o 00:04:01.339 CC lib/idxd/idxd_user.o 00:04:01.339 CC lib/vmd/vmd.o 00:04:01.339 CC lib/vmd/led.o 00:04:01.339 CC lib/env_dpdk/env.o 00:04:01.339 CC lib/env_dpdk/memory.o 00:04:01.339 CC lib/json/json_parse.o 00:04:01.339 CC lib/env_dpdk/pci.o 00:04:01.339 CC lib/env_dpdk/init.o 00:04:01.339 CC lib/json/json_util.o 00:04:01.339 LIB libspdk_rdma_utils.a 00:04:01.340 LIB libspdk_conf.a 00:04:01.598 CC lib/env_dpdk/threads.o 00:04:01.598 SO libspdk_rdma_utils.so.1.0 00:04:01.598 SO libspdk_conf.so.6.0 00:04:01.598 SYMLINK libspdk_rdma_utils.so 00:04:01.598 CC lib/env_dpdk/pci_ioat.o 00:04:01.598 SYMLINK libspdk_conf.so 00:04:01.598 CC lib/env_dpdk/pci_virtio.o 00:04:01.598 CC lib/json/json_write.o 00:04:01.598 CC lib/env_dpdk/pci_vmd.o 00:04:01.598 CC lib/env_dpdk/pci_idxd.o 00:04:01.598 CC lib/env_dpdk/pci_event.o 00:04:01.855 CC lib/env_dpdk/sigbus_handler.o 00:04:01.855 CC lib/env_dpdk/pci_dpdk.o 00:04:01.855 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:01.855 LIB libspdk_idxd.a 00:04:01.855 LIB libspdk_vmd.a 00:04:01.855 CC lib/rdma_provider/common.o 00:04:01.855 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:01.855 SO libspdk_idxd.so.12.1 00:04:01.855 SO libspdk_vmd.so.6.0 00:04:01.855 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:01.855 LIB libspdk_json.a 00:04:01.855 SYMLINK libspdk_idxd.so 00:04:01.855 SO libspdk_json.so.6.0 00:04:01.855 SYMLINK libspdk_vmd.so 00:04:01.855 SYMLINK libspdk_json.so 00:04:02.112 LIB libspdk_rdma_provider.a 00:04:02.112 SO libspdk_rdma_provider.so.7.0 00:04:02.112 SYMLINK libspdk_rdma_provider.so 00:04:02.113 CC lib/jsonrpc/jsonrpc_server.o 00:04:02.113 CC lib/jsonrpc/jsonrpc_client.o 00:04:02.113 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:02.113 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:02.369 LIB libspdk_jsonrpc.a 00:04:02.369 SO libspdk_jsonrpc.so.6.0 00:04:02.625 SYMLINK libspdk_jsonrpc.so 00:04:02.625 CC lib/rpc/rpc.o 00:04:02.881 LIB libspdk_env_dpdk.a 00:04:02.881 SO libspdk_env_dpdk.so.15.1 00:04:02.881 LIB libspdk_rpc.a 00:04:02.881 SO libspdk_rpc.so.6.0 00:04:02.881 SYMLINK libspdk_env_dpdk.so 00:04:03.139 SYMLINK libspdk_rpc.so 00:04:03.139 CC lib/trace/trace.o 00:04:03.139 CC lib/notify/notify_rpc.o 00:04:03.139 CC lib/trace/trace_flags.o 00:04:03.139 CC lib/notify/notify.o 00:04:03.139 CC lib/trace/trace_rpc.o 00:04:03.139 CC lib/keyring/keyring.o 00:04:03.139 CC lib/keyring/keyring_rpc.o 00:04:03.396 LIB libspdk_notify.a 00:04:03.396 SO libspdk_notify.so.6.0 00:04:03.396 SYMLINK libspdk_notify.so 00:04:03.396 LIB libspdk_keyring.a 00:04:03.396 LIB libspdk_trace.a 00:04:03.396 SO libspdk_keyring.so.2.0 00:04:03.654 SO libspdk_trace.so.11.0 00:04:03.654 SYMLINK libspdk_keyring.so 00:04:03.654 SYMLINK libspdk_trace.so 00:04:03.654 CC lib/sock/sock.o 00:04:03.654 CC lib/sock/sock_rpc.o 00:04:03.911 CC lib/thread/thread.o 00:04:03.911 CC lib/thread/iobuf.o 00:04:04.169 LIB libspdk_sock.a 00:04:04.169 SO libspdk_sock.so.10.0 00:04:04.169 SYMLINK libspdk_sock.so 00:04:04.427 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:04.427 CC lib/nvme/nvme_fabric.o 00:04:04.427 CC lib/nvme/nvme_ns_cmd.o 00:04:04.427 CC lib/nvme/nvme_ctrlr.o 00:04:04.427 CC lib/nvme/nvme_ns.o 00:04:04.427 CC lib/nvme/nvme_pcie_common.o 00:04:04.427 CC lib/nvme/nvme_qpair.o 00:04:04.427 CC lib/nvme/nvme_pcie.o 00:04:04.427 CC 
lib/nvme/nvme.o 00:04:04.992 CC lib/nvme/nvme_quirks.o 00:04:04.992 CC lib/nvme/nvme_transport.o 00:04:04.992 CC lib/nvme/nvme_discovery.o 00:04:04.992 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:04.992 LIB libspdk_thread.a 00:04:05.250 SO libspdk_thread.so.11.0 00:04:05.250 SYMLINK libspdk_thread.so 00:04:05.250 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:05.250 CC lib/nvme/nvme_tcp.o 00:04:05.508 CC lib/nvme/nvme_opal.o 00:04:05.508 CC lib/nvme/nvme_io_msg.o 00:04:05.508 CC lib/nvme/nvme_poll_group.o 00:04:05.508 CC lib/nvme/nvme_zns.o 00:04:05.508 CC lib/nvme/nvme_stubs.o 00:04:05.508 CC lib/accel/accel.o 00:04:05.766 CC lib/nvme/nvme_auth.o 00:04:05.766 CC lib/nvme/nvme_cuse.o 00:04:05.766 CC lib/nvme/nvme_rdma.o 00:04:06.022 CC lib/accel/accel_rpc.o 00:04:06.022 CC lib/accel/accel_sw.o 00:04:06.022 CC lib/blob/blobstore.o 00:04:06.022 CC lib/blob/request.o 00:04:06.294 CC lib/init/json_config.o 00:04:06.294 CC lib/init/subsystem.o 00:04:06.294 CC lib/init/subsystem_rpc.o 00:04:06.552 CC lib/init/rpc.o 00:04:06.552 CC lib/blob/zeroes.o 00:04:06.552 LIB libspdk_accel.a 00:04:06.552 SO libspdk_accel.so.16.0 00:04:06.552 CC lib/virtio/virtio.o 00:04:06.552 CC lib/fsdev/fsdev.o 00:04:06.552 LIB libspdk_init.a 00:04:06.552 SYMLINK libspdk_accel.so 00:04:06.552 CC lib/fsdev/fsdev_io.o 00:04:06.552 CC lib/fsdev/fsdev_rpc.o 00:04:06.552 SO libspdk_init.so.6.0 00:04:06.552 CC lib/blob/blob_bs_dev.o 00:04:06.811 SYMLINK libspdk_init.so 00:04:06.811 CC lib/virtio/virtio_vhost_user.o 00:04:06.811 CC lib/virtio/virtio_vfio_user.o 00:04:06.811 CC lib/virtio/virtio_pci.o 00:04:06.811 CC lib/bdev/bdev.o 00:04:06.811 CC lib/bdev/bdev_rpc.o 00:04:06.811 CC lib/bdev/bdev_zone.o 00:04:07.069 CC lib/event/app.o 00:04:07.069 CC lib/event/reactor.o 00:04:07.069 CC lib/event/log_rpc.o 00:04:07.069 LIB libspdk_nvme.a 00:04:07.069 CC lib/event/app_rpc.o 00:04:07.069 LIB libspdk_virtio.a 00:04:07.069 CC lib/bdev/part.o 00:04:07.069 SO libspdk_virtio.so.7.0 00:04:07.069 LIB libspdk_fsdev.a 00:04:07.069 CC lib/event/scheduler_static.o 00:04:07.069 SO libspdk_fsdev.so.2.0 00:04:07.069 SO libspdk_nvme.so.15.0 00:04:07.069 SYMLINK libspdk_virtio.so 00:04:07.327 CC lib/bdev/scsi_nvme.o 00:04:07.327 SYMLINK libspdk_fsdev.so 00:04:07.327 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:07.327 SYMLINK libspdk_nvme.so 00:04:07.585 LIB libspdk_event.a 00:04:07.585 SO libspdk_event.so.14.0 00:04:07.585 SYMLINK libspdk_event.so 00:04:08.150 LIB libspdk_fuse_dispatcher.a 00:04:08.150 SO libspdk_fuse_dispatcher.so.1.0 00:04:08.150 SYMLINK libspdk_fuse_dispatcher.so 00:04:09.083 LIB libspdk_blob.a 00:04:09.083 SO libspdk_blob.so.12.0 00:04:09.083 SYMLINK libspdk_blob.so 00:04:09.342 CC lib/lvol/lvol.o 00:04:09.342 CC lib/blobfs/blobfs.o 00:04:09.342 CC lib/blobfs/tree.o 00:04:09.601 LIB libspdk_bdev.a 00:04:09.601 SO libspdk_bdev.so.17.0 00:04:09.601 SYMLINK libspdk_bdev.so 00:04:09.859 CC lib/scsi/dev.o 00:04:09.859 CC lib/scsi/lun.o 00:04:09.859 CC lib/scsi/port.o 00:04:09.859 CC lib/scsi/scsi.o 00:04:09.859 CC lib/ublk/ublk.o 00:04:09.859 CC lib/nvmf/ctrlr.o 00:04:09.859 CC lib/nbd/nbd.o 00:04:09.859 CC lib/ftl/ftl_core.o 00:04:09.859 CC lib/nbd/nbd_rpc.o 00:04:10.116 CC lib/ublk/ublk_rpc.o 00:04:10.116 CC lib/scsi/scsi_bdev.o 00:04:10.116 CC lib/ftl/ftl_init.o 00:04:10.116 CC lib/ftl/ftl_layout.o 00:04:10.116 LIB libspdk_lvol.a 00:04:10.116 SO libspdk_lvol.so.11.0 00:04:10.116 LIB libspdk_blobfs.a 00:04:10.116 CC lib/ftl/ftl_debug.o 00:04:10.116 SO libspdk_blobfs.so.11.0 00:04:10.116 CC lib/ftl/ftl_io.o 00:04:10.116 SYMLINK 
libspdk_lvol.so 00:04:10.116 CC lib/ftl/ftl_sb.o 00:04:10.116 CC lib/nvmf/ctrlr_discovery.o 00:04:10.116 SYMLINK libspdk_blobfs.so 00:04:10.116 CC lib/nvmf/ctrlr_bdev.o 00:04:10.116 LIB libspdk_nbd.a 00:04:10.375 SO libspdk_nbd.so.7.0 00:04:10.375 CC lib/ftl/ftl_l2p.o 00:04:10.375 CC lib/scsi/scsi_pr.o 00:04:10.375 SYMLINK libspdk_nbd.so 00:04:10.375 CC lib/nvmf/subsystem.o 00:04:10.375 CC lib/ftl/ftl_l2p_flat.o 00:04:10.375 CC lib/nvmf/nvmf.o 00:04:10.375 LIB libspdk_ublk.a 00:04:10.375 CC lib/ftl/ftl_nv_cache.o 00:04:10.375 SO libspdk_ublk.so.3.0 00:04:10.633 CC lib/ftl/ftl_band.o 00:04:10.633 SYMLINK libspdk_ublk.so 00:04:10.633 CC lib/ftl/ftl_band_ops.o 00:04:10.633 CC lib/scsi/scsi_rpc.o 00:04:10.633 CC lib/ftl/ftl_writer.o 00:04:10.633 CC lib/scsi/task.o 00:04:10.633 CC lib/ftl/ftl_rq.o 00:04:10.633 LIB libspdk_scsi.a 00:04:10.897 SO libspdk_scsi.so.9.0 00:04:10.897 CC lib/ftl/ftl_reloc.o 00:04:10.897 CC lib/nvmf/nvmf_rpc.o 00:04:10.897 SYMLINK libspdk_scsi.so 00:04:10.897 CC lib/nvmf/transport.o 00:04:10.897 CC lib/ftl/ftl_l2p_cache.o 00:04:11.156 CC lib/ftl/ftl_p2l.o 00:04:11.156 CC lib/iscsi/conn.o 00:04:11.156 CC lib/vhost/vhost.o 00:04:11.156 CC lib/vhost/vhost_rpc.o 00:04:11.414 CC lib/nvmf/tcp.o 00:04:11.414 CC lib/ftl/ftl_p2l_log.o 00:04:11.414 CC lib/ftl/mngt/ftl_mngt.o 00:04:11.414 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:11.414 CC lib/iscsi/init_grp.o 00:04:11.414 CC lib/iscsi/iscsi.o 00:04:11.672 CC lib/iscsi/param.o 00:04:11.672 CC lib/iscsi/portal_grp.o 00:04:11.672 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:11.672 CC lib/nvmf/stubs.o 00:04:11.672 CC lib/vhost/vhost_scsi.o 00:04:11.672 CC lib/nvmf/mdns_server.o 00:04:11.672 CC lib/nvmf/rdma.o 00:04:11.672 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:11.931 CC lib/nvmf/auth.o 00:04:11.931 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:11.931 CC lib/vhost/vhost_blk.o 00:04:11.931 CC lib/vhost/rte_vhost_user.o 00:04:11.931 CC lib/iscsi/tgt_node.o 00:04:12.190 CC lib/iscsi/iscsi_subsystem.o 00:04:12.190 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:12.450 CC lib/iscsi/iscsi_rpc.o 00:04:12.450 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:12.450 CC lib/iscsi/task.o 00:04:12.450 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:12.450 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:12.707 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:12.707 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:12.707 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:12.707 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:12.707 CC lib/ftl/utils/ftl_conf.o 00:04:12.707 CC lib/ftl/utils/ftl_md.o 00:04:12.707 CC lib/ftl/utils/ftl_mempool.o 00:04:12.707 CC lib/ftl/utils/ftl_bitmap.o 00:04:12.707 LIB libspdk_iscsi.a 00:04:12.707 CC lib/ftl/utils/ftl_property.o 00:04:12.965 LIB libspdk_vhost.a 00:04:12.965 SO libspdk_iscsi.so.8.0 00:04:12.965 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:12.965 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:12.965 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:12.965 SO libspdk_vhost.so.8.0 00:04:12.965 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:12.965 SYMLINK libspdk_vhost.so 00:04:12.965 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:12.965 SYMLINK libspdk_iscsi.so 00:04:12.965 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:12.965 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:12.965 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:13.223 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:13.223 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:13.223 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:13.223 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:13.223 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:13.223 CC lib/ftl/base/ftl_base_dev.o 00:04:13.223 CC 
lib/ftl/base/ftl_base_bdev.o 00:04:13.223 CC lib/ftl/ftl_trace.o 00:04:13.481 LIB libspdk_ftl.a 00:04:13.739 SO libspdk_ftl.so.9.0 00:04:13.739 LIB libspdk_nvmf.a 00:04:13.739 SO libspdk_nvmf.so.20.0 00:04:13.997 SYMLINK libspdk_ftl.so 00:04:13.997 SYMLINK libspdk_nvmf.so 00:04:14.254 CC module/env_dpdk/env_dpdk_rpc.o 00:04:14.512 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:14.512 CC module/blob/bdev/blob_bdev.o 00:04:14.512 CC module/scheduler/gscheduler/gscheduler.o 00:04:14.512 CC module/keyring/file/keyring.o 00:04:14.512 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:14.512 CC module/sock/posix/posix.o 00:04:14.512 CC module/accel/error/accel_error.o 00:04:14.512 CC module/fsdev/aio/fsdev_aio.o 00:04:14.512 CC module/accel/ioat/accel_ioat.o 00:04:14.512 LIB libspdk_env_dpdk_rpc.a 00:04:14.512 SO libspdk_env_dpdk_rpc.so.6.0 00:04:14.512 CC module/keyring/file/keyring_rpc.o 00:04:14.512 LIB libspdk_scheduler_dpdk_governor.a 00:04:14.512 SYMLINK libspdk_env_dpdk_rpc.so 00:04:14.512 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:14.512 LIB libspdk_scheduler_gscheduler.a 00:04:14.512 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:14.512 SO libspdk_scheduler_gscheduler.so.4.0 00:04:14.512 LIB libspdk_scheduler_dynamic.a 00:04:14.512 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:14.512 SYMLINK libspdk_scheduler_gscheduler.so 00:04:14.512 CC module/accel/ioat/accel_ioat_rpc.o 00:04:14.512 SO libspdk_scheduler_dynamic.so.4.0 00:04:14.512 CC module/accel/error/accel_error_rpc.o 00:04:14.512 LIB libspdk_keyring_file.a 00:04:14.770 SO libspdk_keyring_file.so.2.0 00:04:14.770 CC module/fsdev/aio/linux_aio_mgr.o 00:04:14.770 LIB libspdk_blob_bdev.a 00:04:14.770 SYMLINK libspdk_scheduler_dynamic.so 00:04:14.770 SYMLINK libspdk_keyring_file.so 00:04:14.770 SO libspdk_blob_bdev.so.12.0 00:04:14.770 LIB libspdk_accel_error.a 00:04:14.770 LIB libspdk_accel_ioat.a 00:04:14.770 CC module/keyring/linux/keyring.o 00:04:14.770 CC module/keyring/linux/keyring_rpc.o 00:04:14.770 CC module/accel/dsa/accel_dsa.o 00:04:14.770 SYMLINK libspdk_blob_bdev.so 00:04:14.770 SO libspdk_accel_error.so.2.0 00:04:14.770 SO libspdk_accel_ioat.so.6.0 00:04:14.770 SYMLINK libspdk_accel_error.so 00:04:14.770 CC module/accel/dsa/accel_dsa_rpc.o 00:04:14.770 SYMLINK libspdk_accel_ioat.so 00:04:14.770 CC module/accel/iaa/accel_iaa.o 00:04:15.029 LIB libspdk_keyring_linux.a 00:04:15.029 CC module/accel/iaa/accel_iaa_rpc.o 00:04:15.029 SO libspdk_keyring_linux.so.1.0 00:04:15.029 CC module/bdev/delay/vbdev_delay.o 00:04:15.029 LIB libspdk_fsdev_aio.a 00:04:15.029 CC module/bdev/gpt/gpt.o 00:04:15.029 CC module/bdev/error/vbdev_error.o 00:04:15.029 CC module/bdev/error/vbdev_error_rpc.o 00:04:15.029 LIB libspdk_accel_dsa.a 00:04:15.029 SYMLINK libspdk_keyring_linux.so 00:04:15.029 SO libspdk_fsdev_aio.so.1.0 00:04:15.029 CC module/bdev/gpt/vbdev_gpt.o 00:04:15.029 LIB libspdk_accel_iaa.a 00:04:15.029 SO libspdk_accel_dsa.so.5.0 00:04:15.029 SO libspdk_accel_iaa.so.3.0 00:04:15.029 SYMLINK libspdk_accel_dsa.so 00:04:15.029 SYMLINK libspdk_fsdev_aio.so 00:04:15.029 CC module/blobfs/bdev/blobfs_bdev.o 00:04:15.029 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:15.029 SYMLINK libspdk_accel_iaa.so 00:04:15.411 LIB libspdk_sock_posix.a 00:04:15.411 SO libspdk_sock_posix.so.6.0 00:04:15.411 LIB libspdk_bdev_error.a 00:04:15.411 CC module/bdev/lvol/vbdev_lvol.o 00:04:15.411 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:15.411 SO libspdk_bdev_error.so.6.0 00:04:15.411 CC module/bdev/malloc/bdev_malloc.o 00:04:15.411 SYMLINK 
libspdk_sock_posix.so 00:04:15.411 CC module/bdev/null/bdev_null.o 00:04:15.411 LIB libspdk_bdev_gpt.a 00:04:15.411 CC module/bdev/null/bdev_null_rpc.o 00:04:15.411 SO libspdk_bdev_gpt.so.6.0 00:04:15.411 SYMLINK libspdk_bdev_error.so 00:04:15.411 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:15.411 CC module/bdev/nvme/bdev_nvme.o 00:04:15.411 LIB libspdk_bdev_delay.a 00:04:15.411 SYMLINK libspdk_bdev_gpt.so 00:04:15.411 SO libspdk_bdev_delay.so.6.0 00:04:15.411 LIB libspdk_blobfs_bdev.a 00:04:15.411 CC module/bdev/passthru/vbdev_passthru.o 00:04:15.411 SYMLINK libspdk_bdev_delay.so 00:04:15.411 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:15.411 SO libspdk_blobfs_bdev.so.6.0 00:04:15.411 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:15.411 CC module/bdev/nvme/nvme_rpc.o 00:04:15.411 SYMLINK libspdk_blobfs_bdev.so 00:04:15.411 CC module/bdev/nvme/bdev_mdns_client.o 00:04:15.411 LIB libspdk_bdev_null.a 00:04:15.670 CC module/bdev/raid/bdev_raid.o 00:04:15.670 SO libspdk_bdev_null.so.6.0 00:04:15.670 SYMLINK libspdk_bdev_null.so 00:04:15.670 CC module/bdev/nvme/vbdev_opal.o 00:04:15.670 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:15.670 LIB libspdk_bdev_malloc.a 00:04:15.670 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:15.670 SO libspdk_bdev_malloc.so.6.0 00:04:15.670 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:15.670 CC module/bdev/raid/bdev_raid_rpc.o 00:04:15.670 SYMLINK libspdk_bdev_malloc.so 00:04:15.670 LIB libspdk_bdev_passthru.a 00:04:15.670 SO libspdk_bdev_passthru.so.6.0 00:04:15.930 CC module/bdev/raid/bdev_raid_sb.o 00:04:15.930 SYMLINK libspdk_bdev_passthru.so 00:04:15.930 CC module/bdev/split/vbdev_split.o 00:04:15.930 CC module/bdev/raid/raid0.o 00:04:15.930 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:15.930 CC module/bdev/xnvme/bdev_xnvme.o 00:04:15.930 LIB libspdk_bdev_lvol.a 00:04:15.930 CC module/bdev/aio/bdev_aio.o 00:04:15.930 SO libspdk_bdev_lvol.so.6.0 00:04:15.930 CC module/bdev/aio/bdev_aio_rpc.o 00:04:15.930 CC module/bdev/split/vbdev_split_rpc.o 00:04:15.930 SYMLINK libspdk_bdev_lvol.so 00:04:16.189 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:16.189 CC module/bdev/raid/raid1.o 00:04:16.189 CC module/bdev/raid/concat.o 00:04:16.189 LIB libspdk_bdev_split.a 00:04:16.189 CC module/bdev/ftl/bdev_ftl.o 00:04:16.189 SO libspdk_bdev_split.so.6.0 00:04:16.189 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:16.189 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:16.189 SYMLINK libspdk_bdev_split.so 00:04:16.189 LIB libspdk_bdev_zone_block.a 00:04:16.189 SO libspdk_bdev_zone_block.so.6.0 00:04:16.189 LIB libspdk_bdev_xnvme.a 00:04:16.189 SYMLINK libspdk_bdev_zone_block.so 00:04:16.189 SO libspdk_bdev_xnvme.so.3.0 00:04:16.448 LIB libspdk_bdev_aio.a 00:04:16.448 CC module/bdev/iscsi/bdev_iscsi.o 00:04:16.448 SO libspdk_bdev_aio.so.6.0 00:04:16.448 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:16.448 SYMLINK libspdk_bdev_xnvme.so 00:04:16.448 LIB libspdk_bdev_ftl.a 00:04:16.448 SYMLINK libspdk_bdev_aio.so 00:04:16.448 SO libspdk_bdev_ftl.so.6.0 00:04:16.448 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:16.448 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:16.448 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:16.448 SYMLINK libspdk_bdev_ftl.so 00:04:16.448 LIB libspdk_bdev_raid.a 00:04:16.448 SO libspdk_bdev_raid.so.6.0 00:04:16.707 SYMLINK libspdk_bdev_raid.so 00:04:16.707 LIB libspdk_bdev_iscsi.a 00:04:16.707 SO libspdk_bdev_iscsi.so.6.0 00:04:16.707 SYMLINK libspdk_bdev_iscsi.so 00:04:16.965 LIB libspdk_bdev_virtio.a 00:04:16.965 SO libspdk_bdev_virtio.so.6.0 
00:04:16.965 SYMLINK libspdk_bdev_virtio.so 00:04:18.341 LIB libspdk_bdev_nvme.a 00:04:18.341 SO libspdk_bdev_nvme.so.7.1 00:04:18.341 SYMLINK libspdk_bdev_nvme.so 00:04:18.598 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:18.598 CC module/event/subsystems/scheduler/scheduler.o 00:04:18.598 CC module/event/subsystems/vmd/vmd.o 00:04:18.598 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:18.598 CC module/event/subsystems/iobuf/iobuf.o 00:04:18.598 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:18.598 CC module/event/subsystems/keyring/keyring.o 00:04:18.598 CC module/event/subsystems/fsdev/fsdev.o 00:04:18.598 CC module/event/subsystems/sock/sock.o 00:04:18.598 LIB libspdk_event_scheduler.a 00:04:18.598 LIB libspdk_event_vmd.a 00:04:18.598 LIB libspdk_event_keyring.a 00:04:18.598 LIB libspdk_event_vhost_blk.a 00:04:18.598 SO libspdk_event_scheduler.so.4.0 00:04:18.598 LIB libspdk_event_fsdev.a 00:04:18.598 SO libspdk_event_keyring.so.1.0 00:04:18.598 SO libspdk_event_vmd.so.6.0 00:04:18.598 LIB libspdk_event_sock.a 00:04:18.598 LIB libspdk_event_iobuf.a 00:04:18.598 SO libspdk_event_vhost_blk.so.3.0 00:04:18.598 SO libspdk_event_fsdev.so.1.0 00:04:18.598 SO libspdk_event_sock.so.5.0 00:04:18.598 SO libspdk_event_iobuf.so.3.0 00:04:18.598 SYMLINK libspdk_event_scheduler.so 00:04:18.598 SYMLINK libspdk_event_keyring.so 00:04:18.598 SYMLINK libspdk_event_vhost_blk.so 00:04:18.857 SYMLINK libspdk_event_vmd.so 00:04:18.857 SYMLINK libspdk_event_fsdev.so 00:04:18.857 SYMLINK libspdk_event_sock.so 00:04:18.857 SYMLINK libspdk_event_iobuf.so 00:04:18.857 CC module/event/subsystems/accel/accel.o 00:04:19.115 LIB libspdk_event_accel.a 00:04:19.115 SO libspdk_event_accel.so.6.0 00:04:19.115 SYMLINK libspdk_event_accel.so 00:04:19.373 CC module/event/subsystems/bdev/bdev.o 00:04:19.630 LIB libspdk_event_bdev.a 00:04:19.630 SO libspdk_event_bdev.so.6.0 00:04:19.630 SYMLINK libspdk_event_bdev.so 00:04:19.888 CC module/event/subsystems/scsi/scsi.o 00:04:19.888 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:19.888 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:19.888 CC module/event/subsystems/ublk/ublk.o 00:04:19.888 CC module/event/subsystems/nbd/nbd.o 00:04:19.888 LIB libspdk_event_scsi.a 00:04:19.888 LIB libspdk_event_ublk.a 00:04:19.888 LIB libspdk_event_nbd.a 00:04:19.888 SO libspdk_event_scsi.so.6.0 00:04:19.888 SO libspdk_event_ublk.so.3.0 00:04:19.888 SO libspdk_event_nbd.so.6.0 00:04:19.888 LIB libspdk_event_nvmf.a 00:04:19.888 SYMLINK libspdk_event_ublk.so 00:04:19.888 SYMLINK libspdk_event_scsi.so 00:04:19.888 SYMLINK libspdk_event_nbd.so 00:04:19.888 SO libspdk_event_nvmf.so.6.0 00:04:20.145 SYMLINK libspdk_event_nvmf.so 00:04:20.145 CC module/event/subsystems/iscsi/iscsi.o 00:04:20.145 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:20.403 LIB libspdk_event_vhost_scsi.a 00:04:20.403 LIB libspdk_event_iscsi.a 00:04:20.403 SO libspdk_event_vhost_scsi.so.3.0 00:04:20.403 SO libspdk_event_iscsi.so.6.0 00:04:20.403 SYMLINK libspdk_event_vhost_scsi.so 00:04:20.403 SYMLINK libspdk_event_iscsi.so 00:04:20.403 SO libspdk.so.6.0 00:04:20.403 SYMLINK libspdk.so 00:04:20.665 CC app/trace_record/trace_record.o 00:04:20.665 CXX app/trace/trace.o 00:04:20.665 TEST_HEADER include/spdk/accel.h 00:04:20.665 TEST_HEADER include/spdk/accel_module.h 00:04:20.665 TEST_HEADER include/spdk/assert.h 00:04:20.665 TEST_HEADER include/spdk/barrier.h 00:04:20.665 TEST_HEADER include/spdk/base64.h 00:04:20.665 TEST_HEADER include/spdk/bdev.h 00:04:20.665 TEST_HEADER include/spdk/bdev_module.h 
00:04:20.665 TEST_HEADER include/spdk/bdev_zone.h 00:04:20.665 TEST_HEADER include/spdk/bit_array.h 00:04:20.665 TEST_HEADER include/spdk/bit_pool.h 00:04:20.665 TEST_HEADER include/spdk/blob_bdev.h 00:04:20.665 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:20.665 TEST_HEADER include/spdk/blobfs.h 00:04:20.665 TEST_HEADER include/spdk/blob.h 00:04:20.665 TEST_HEADER include/spdk/conf.h 00:04:20.665 TEST_HEADER include/spdk/config.h 00:04:20.665 TEST_HEADER include/spdk/cpuset.h 00:04:20.665 TEST_HEADER include/spdk/crc16.h 00:04:20.665 TEST_HEADER include/spdk/crc32.h 00:04:20.665 TEST_HEADER include/spdk/crc64.h 00:04:20.665 TEST_HEADER include/spdk/dif.h 00:04:20.665 TEST_HEADER include/spdk/dma.h 00:04:20.665 TEST_HEADER include/spdk/endian.h 00:04:20.665 TEST_HEADER include/spdk/env_dpdk.h 00:04:20.665 TEST_HEADER include/spdk/env.h 00:04:20.665 CC app/nvmf_tgt/nvmf_main.o 00:04:20.665 TEST_HEADER include/spdk/event.h 00:04:20.665 CC examples/util/zipf/zipf.o 00:04:20.665 TEST_HEADER include/spdk/fd_group.h 00:04:20.665 CC examples/ioat/perf/perf.o 00:04:20.665 TEST_HEADER include/spdk/fd.h 00:04:20.665 TEST_HEADER include/spdk/file.h 00:04:20.665 TEST_HEADER include/spdk/fsdev.h 00:04:20.665 TEST_HEADER include/spdk/fsdev_module.h 00:04:20.665 TEST_HEADER include/spdk/ftl.h 00:04:20.665 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:20.665 TEST_HEADER include/spdk/gpt_spec.h 00:04:20.665 CC test/thread/poller_perf/poller_perf.o 00:04:20.665 TEST_HEADER include/spdk/hexlify.h 00:04:20.665 TEST_HEADER include/spdk/histogram_data.h 00:04:20.665 TEST_HEADER include/spdk/idxd.h 00:04:20.665 TEST_HEADER include/spdk/idxd_spec.h 00:04:20.665 TEST_HEADER include/spdk/init.h 00:04:20.666 TEST_HEADER include/spdk/ioat.h 00:04:20.666 TEST_HEADER include/spdk/ioat_spec.h 00:04:20.666 TEST_HEADER include/spdk/iscsi_spec.h 00:04:20.666 TEST_HEADER include/spdk/json.h 00:04:20.666 TEST_HEADER include/spdk/jsonrpc.h 00:04:20.666 TEST_HEADER include/spdk/keyring.h 00:04:20.666 TEST_HEADER include/spdk/keyring_module.h 00:04:20.666 TEST_HEADER include/spdk/likely.h 00:04:20.666 TEST_HEADER include/spdk/log.h 00:04:20.666 TEST_HEADER include/spdk/lvol.h 00:04:20.666 CC test/dma/test_dma/test_dma.o 00:04:20.666 TEST_HEADER include/spdk/md5.h 00:04:20.666 TEST_HEADER include/spdk/memory.h 00:04:20.666 TEST_HEADER include/spdk/mmio.h 00:04:20.666 TEST_HEADER include/spdk/nbd.h 00:04:20.666 TEST_HEADER include/spdk/net.h 00:04:20.666 TEST_HEADER include/spdk/notify.h 00:04:20.666 TEST_HEADER include/spdk/nvme.h 00:04:20.666 TEST_HEADER include/spdk/nvme_intel.h 00:04:20.666 CC test/app/bdev_svc/bdev_svc.o 00:04:20.666 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:20.666 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:20.666 TEST_HEADER include/spdk/nvme_spec.h 00:04:20.666 TEST_HEADER include/spdk/nvme_zns.h 00:04:20.666 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:20.666 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:20.666 TEST_HEADER include/spdk/nvmf.h 00:04:20.666 TEST_HEADER include/spdk/nvmf_spec.h 00:04:20.666 TEST_HEADER include/spdk/nvmf_transport.h 00:04:20.666 TEST_HEADER include/spdk/opal.h 00:04:20.666 TEST_HEADER include/spdk/opal_spec.h 00:04:20.666 TEST_HEADER include/spdk/pci_ids.h 00:04:20.666 TEST_HEADER include/spdk/pipe.h 00:04:20.666 TEST_HEADER include/spdk/queue.h 00:04:20.666 TEST_HEADER include/spdk/reduce.h 00:04:20.666 TEST_HEADER include/spdk/rpc.h 00:04:20.666 CC test/env/mem_callbacks/mem_callbacks.o 00:04:20.666 TEST_HEADER include/spdk/scheduler.h 00:04:20.666 TEST_HEADER 
include/spdk/scsi.h 00:04:20.666 TEST_HEADER include/spdk/scsi_spec.h 00:04:20.666 TEST_HEADER include/spdk/sock.h 00:04:20.666 TEST_HEADER include/spdk/stdinc.h 00:04:20.923 TEST_HEADER include/spdk/string.h 00:04:20.923 TEST_HEADER include/spdk/thread.h 00:04:20.923 TEST_HEADER include/spdk/trace.h 00:04:20.923 TEST_HEADER include/spdk/trace_parser.h 00:04:20.923 TEST_HEADER include/spdk/tree.h 00:04:20.923 TEST_HEADER include/spdk/ublk.h 00:04:20.923 TEST_HEADER include/spdk/util.h 00:04:20.923 TEST_HEADER include/spdk/uuid.h 00:04:20.923 TEST_HEADER include/spdk/version.h 00:04:20.923 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:20.923 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:20.923 TEST_HEADER include/spdk/vhost.h 00:04:20.923 TEST_HEADER include/spdk/vmd.h 00:04:20.923 TEST_HEADER include/spdk/xor.h 00:04:20.923 TEST_HEADER include/spdk/zipf.h 00:04:20.923 CXX test/cpp_headers/accel.o 00:04:20.923 LINK zipf 00:04:20.923 LINK nvmf_tgt 00:04:20.923 LINK poller_perf 00:04:20.923 LINK ioat_perf 00:04:20.923 LINK spdk_trace_record 00:04:20.923 LINK bdev_svc 00:04:20.923 LINK spdk_trace 00:04:20.923 CXX test/cpp_headers/accel_module.o 00:04:20.923 CC test/rpc_client/rpc_client_test.o 00:04:20.923 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:20.923 CC examples/ioat/verify/verify.o 00:04:21.181 CXX test/cpp_headers/assert.o 00:04:21.181 CC test/app/histogram_perf/histogram_perf.o 00:04:21.181 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:21.181 LINK test_dma 00:04:21.181 LINK rpc_client_test 00:04:21.181 CC app/iscsi_tgt/iscsi_tgt.o 00:04:21.181 LINK mem_callbacks 00:04:21.181 LINK interrupt_tgt 00:04:21.181 CXX test/cpp_headers/barrier.o 00:04:21.181 CC app/spdk_tgt/spdk_tgt.o 00:04:21.181 LINK histogram_perf 00:04:21.181 CXX test/cpp_headers/base64.o 00:04:21.181 CXX test/cpp_headers/bdev.o 00:04:21.181 LINK verify 00:04:21.181 CXX test/cpp_headers/bdev_module.o 00:04:21.181 LINK iscsi_tgt 00:04:21.437 CC test/env/vtophys/vtophys.o 00:04:21.437 CXX test/cpp_headers/bdev_zone.o 00:04:21.437 LINK spdk_tgt 00:04:21.437 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:21.437 CXX test/cpp_headers/bit_array.o 00:04:21.437 LINK vtophys 00:04:21.437 LINK nvme_fuzz 00:04:21.437 CXX test/cpp_headers/bit_pool.o 00:04:21.437 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:21.437 CC examples/sock/hello_world/hello_sock.o 00:04:21.437 CC examples/vmd/lsvmd/lsvmd.o 00:04:21.437 CC examples/thread/thread/thread_ex.o 00:04:21.437 CC app/spdk_lspci/spdk_lspci.o 00:04:21.694 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:21.694 CXX test/cpp_headers/blob_bdev.o 00:04:21.694 CC examples/idxd/perf/perf.o 00:04:21.694 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:21.694 CC test/env/memory/memory_ut.o 00:04:21.694 LINK spdk_lspci 00:04:21.694 LINK lsvmd 00:04:21.694 LINK hello_sock 00:04:21.694 LINK env_dpdk_post_init 00:04:21.694 CXX test/cpp_headers/blobfs_bdev.o 00:04:21.694 LINK thread 00:04:21.951 CC examples/vmd/led/led.o 00:04:21.951 CC app/spdk_nvme_perf/perf.o 00:04:21.951 CXX test/cpp_headers/blobfs.o 00:04:21.951 LINK idxd_perf 00:04:21.951 LINK vhost_fuzz 00:04:21.951 LINK led 00:04:21.951 CC test/event/event_perf/event_perf.o 00:04:21.951 CC examples/nvme/hello_world/hello_world.o 00:04:21.951 CC test/event/reactor/reactor.o 00:04:21.951 CXX test/cpp_headers/blob.o 00:04:21.951 CC test/event/reactor_perf/reactor_perf.o 00:04:22.207 LINK event_perf 00:04:22.207 LINK reactor 00:04:22.207 CC test/event/app_repeat/app_repeat.o 00:04:22.207 LINK reactor_perf 00:04:22.207 LINK 
hello_world 00:04:22.207 CC test/event/scheduler/scheduler.o 00:04:22.207 CXX test/cpp_headers/conf.o 00:04:22.207 CXX test/cpp_headers/config.o 00:04:22.207 LINK app_repeat 00:04:22.207 CC app/spdk_nvme_identify/identify.o 00:04:22.207 CC app/spdk_nvme_discover/discovery_aer.o 00:04:22.207 CXX test/cpp_headers/cpuset.o 00:04:22.207 CC app/spdk_top/spdk_top.o 00:04:22.463 LINK scheduler 00:04:22.463 CC examples/nvme/reconnect/reconnect.o 00:04:22.463 LINK spdk_nvme_discover 00:04:22.463 CXX test/cpp_headers/crc16.o 00:04:22.463 CC examples/accel/perf/accel_perf.o 00:04:22.463 LINK spdk_nvme_perf 00:04:22.463 CXX test/cpp_headers/crc32.o 00:04:22.719 LINK reconnect 00:04:22.719 CXX test/cpp_headers/crc64.o 00:04:22.719 CC examples/blob/hello_world/hello_blob.o 00:04:22.719 CC test/app/jsoncat/jsoncat.o 00:04:22.719 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:22.719 LINK memory_ut 00:04:22.719 LINK iscsi_fuzz 00:04:22.719 CXX test/cpp_headers/dif.o 00:04:22.719 LINK jsoncat 00:04:22.976 LINK hello_blob 00:04:22.976 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:22.976 LINK accel_perf 00:04:22.976 LINK spdk_nvme_identify 00:04:22.976 CXX test/cpp_headers/dma.o 00:04:22.976 CC test/env/pci/pci_ut.o 00:04:22.976 LINK hello_fsdev 00:04:22.976 CC app/vhost/vhost.o 00:04:22.976 CC test/app/stub/stub.o 00:04:22.976 LINK spdk_top 00:04:22.976 CXX test/cpp_headers/endian.o 00:04:23.232 CC examples/blob/cli/blobcli.o 00:04:23.232 CC app/spdk_dd/spdk_dd.o 00:04:23.232 LINK stub 00:04:23.232 LINK vhost 00:04:23.232 CXX test/cpp_headers/env_dpdk.o 00:04:23.232 CC examples/bdev/hello_world/hello_bdev.o 00:04:23.232 CC examples/nvme/arbitration/arbitration.o 00:04:23.232 CC examples/bdev/bdevperf/bdevperf.o 00:04:23.232 LINK pci_ut 00:04:23.232 CXX test/cpp_headers/env.o 00:04:23.489 LINK nvme_manage 00:04:23.489 CC test/nvme/aer/aer.o 00:04:23.489 LINK hello_bdev 00:04:23.489 CXX test/cpp_headers/event.o 00:04:23.489 LINK arbitration 00:04:23.489 LINK spdk_dd 00:04:23.489 CC test/accel/dif/dif.o 00:04:23.489 LINK blobcli 00:04:23.489 CXX test/cpp_headers/fd_group.o 00:04:23.748 LINK aer 00:04:23.748 CC test/blobfs/mkfs/mkfs.o 00:04:23.748 CC examples/nvme/hotplug/hotplug.o 00:04:23.748 CC app/fio/nvme/fio_plugin.o 00:04:23.748 CC test/nvme/reset/reset.o 00:04:23.748 CXX test/cpp_headers/fd.o 00:04:23.748 CC test/nvme/sgl/sgl.o 00:04:23.748 CC test/lvol/esnap/esnap.o 00:04:23.748 LINK mkfs 00:04:23.748 CXX test/cpp_headers/file.o 00:04:23.748 LINK hotplug 00:04:24.005 CC app/fio/bdev/fio_plugin.o 00:04:24.005 LINK bdevperf 00:04:24.005 LINK reset 00:04:24.005 LINK sgl 00:04:24.005 CXX test/cpp_headers/fsdev.o 00:04:24.005 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:24.005 CC test/nvme/e2edp/nvme_dp.o 00:04:24.005 LINK dif 00:04:24.005 CC test/nvme/overhead/overhead.o 00:04:24.005 CXX test/cpp_headers/fsdev_module.o 00:04:24.264 LINK spdk_nvme 00:04:24.264 LINK cmb_copy 00:04:24.264 CC test/nvme/err_injection/err_injection.o 00:04:24.264 CC examples/nvme/abort/abort.o 00:04:24.264 LINK nvme_dp 00:04:24.264 CXX test/cpp_headers/ftl.o 00:04:24.264 CC test/nvme/startup/startup.o 00:04:24.264 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:24.264 LINK spdk_bdev 00:04:24.264 LINK err_injection 00:04:24.264 CC test/nvme/reserve/reserve.o 00:04:24.264 CXX test/cpp_headers/fuse_dispatcher.o 00:04:24.264 CXX test/cpp_headers/gpt_spec.o 00:04:24.264 LINK overhead 00:04:24.523 LINK pmr_persistence 00:04:24.523 LINK abort 00:04:24.523 LINK startup 00:04:24.523 CC test/nvme/simple_copy/simple_copy.o 
00:04:24.523 LINK reserve 00:04:24.523 CXX test/cpp_headers/hexlify.o 00:04:24.523 CXX test/cpp_headers/histogram_data.o 00:04:24.523 CC test/nvme/connect_stress/connect_stress.o 00:04:24.523 CC test/nvme/compliance/nvme_compliance.o 00:04:24.523 CC test/nvme/boot_partition/boot_partition.o 00:04:24.781 CC test/nvme/fused_ordering/fused_ordering.o 00:04:24.781 CXX test/cpp_headers/idxd.o 00:04:24.781 LINK simple_copy 00:04:24.781 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:24.781 CC examples/nvmf/nvmf/nvmf.o 00:04:24.781 CC test/nvme/fdp/fdp.o 00:04:24.781 LINK boot_partition 00:04:24.781 LINK connect_stress 00:04:24.781 CXX test/cpp_headers/idxd_spec.o 00:04:24.781 LINK fused_ordering 00:04:24.781 CC test/nvme/cuse/cuse.o 00:04:24.781 LINK doorbell_aers 00:04:24.781 LINK nvme_compliance 00:04:24.781 CXX test/cpp_headers/init.o 00:04:25.038 CXX test/cpp_headers/ioat.o 00:04:25.038 CXX test/cpp_headers/ioat_spec.o 00:04:25.038 LINK nvmf 00:04:25.038 CXX test/cpp_headers/iscsi_spec.o 00:04:25.038 LINK fdp 00:04:25.038 CXX test/cpp_headers/json.o 00:04:25.038 CXX test/cpp_headers/keyring.o 00:04:25.038 CXX test/cpp_headers/jsonrpc.o 00:04:25.038 CXX test/cpp_headers/keyring_module.o 00:04:25.038 CXX test/cpp_headers/likely.o 00:04:25.038 CXX test/cpp_headers/log.o 00:04:25.038 CXX test/cpp_headers/lvol.o 00:04:25.358 CXX test/cpp_headers/md5.o 00:04:25.358 CXX test/cpp_headers/memory.o 00:04:25.358 CXX test/cpp_headers/mmio.o 00:04:25.358 CXX test/cpp_headers/nbd.o 00:04:25.358 CXX test/cpp_headers/net.o 00:04:25.358 CC test/bdev/bdevio/bdevio.o 00:04:25.358 CXX test/cpp_headers/notify.o 00:04:25.358 CXX test/cpp_headers/nvme.o 00:04:25.358 CXX test/cpp_headers/nvme_intel.o 00:04:25.358 CXX test/cpp_headers/nvme_ocssd.o 00:04:25.358 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:25.358 CXX test/cpp_headers/nvme_spec.o 00:04:25.358 CXX test/cpp_headers/nvme_zns.o 00:04:25.358 CXX test/cpp_headers/nvmf_cmd.o 00:04:25.358 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:25.358 CXX test/cpp_headers/nvmf.o 00:04:25.649 CXX test/cpp_headers/nvmf_spec.o 00:04:25.649 CXX test/cpp_headers/nvmf_transport.o 00:04:25.649 CXX test/cpp_headers/opal.o 00:04:25.649 CXX test/cpp_headers/opal_spec.o 00:04:25.649 CXX test/cpp_headers/pci_ids.o 00:04:25.649 CXX test/cpp_headers/pipe.o 00:04:25.649 LINK bdevio 00:04:25.649 CXX test/cpp_headers/queue.o 00:04:25.649 CXX test/cpp_headers/reduce.o 00:04:25.649 CXX test/cpp_headers/rpc.o 00:04:25.649 CXX test/cpp_headers/scheduler.o 00:04:25.649 CXX test/cpp_headers/scsi.o 00:04:25.649 CXX test/cpp_headers/scsi_spec.o 00:04:25.649 CXX test/cpp_headers/sock.o 00:04:25.649 CXX test/cpp_headers/stdinc.o 00:04:25.649 CXX test/cpp_headers/string.o 00:04:25.649 CXX test/cpp_headers/thread.o 00:04:25.907 CXX test/cpp_headers/trace.o 00:04:25.907 CXX test/cpp_headers/trace_parser.o 00:04:25.908 CXX test/cpp_headers/tree.o 00:04:25.908 CXX test/cpp_headers/ublk.o 00:04:25.908 CXX test/cpp_headers/util.o 00:04:25.908 CXX test/cpp_headers/uuid.o 00:04:25.908 CXX test/cpp_headers/version.o 00:04:25.908 CXX test/cpp_headers/vfio_user_pci.o 00:04:25.908 CXX test/cpp_headers/vfio_user_spec.o 00:04:25.908 CXX test/cpp_headers/vhost.o 00:04:25.908 CXX test/cpp_headers/vmd.o 00:04:25.908 CXX test/cpp_headers/xor.o 00:04:25.908 CXX test/cpp_headers/zipf.o 00:04:25.908 LINK cuse 00:04:29.192 LINK esnap 00:04:29.193 00:04:29.193 real 1m7.736s 00:04:29.193 user 6m19.581s 00:04:29.193 sys 1m8.597s 00:04:29.193 21:55:01 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:29.193 
21:55:01 make -- common/autotest_common.sh@10 -- $ set +x 00:04:29.193 ************************************ 00:04:29.193 END TEST make 00:04:29.193 ************************************ 00:04:29.193 21:55:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:29.193 21:55:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:29.193 21:55:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:29.193 21:55:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.193 21:55:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:29.193 21:55:01 -- pm/common@44 -- $ pid=5071 00:04:29.193 21:55:01 -- pm/common@50 -- $ kill -TERM 5071 00:04:29.193 21:55:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.193 21:55:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:29.193 21:55:01 -- pm/common@44 -- $ pid=5072 00:04:29.193 21:55:01 -- pm/common@50 -- $ kill -TERM 5072 00:04:29.193 21:55:01 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:29.193 21:55:01 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:29.193 21:55:01 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.193 21:55:01 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.193 21:55:01 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.193 21:55:01 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.193 21:55:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.193 21:55:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.193 21:55:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.193 21:55:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.193 21:55:01 -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.193 21:55:01 -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.193 21:55:01 -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.193 21:55:01 -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.193 21:55:01 -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.193 21:55:01 -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.193 21:55:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.193 21:55:01 -- scripts/common.sh@344 -- # case "$op" in 00:04:29.193 21:55:01 -- scripts/common.sh@345 -- # : 1 00:04:29.193 21:55:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.193 21:55:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.193 21:55:01 -- scripts/common.sh@365 -- # decimal 1 00:04:29.193 21:55:01 -- scripts/common.sh@353 -- # local d=1 00:04:29.193 21:55:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.193 21:55:01 -- scripts/common.sh@355 -- # echo 1 00:04:29.193 21:55:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.193 21:55:01 -- scripts/common.sh@366 -- # decimal 2 00:04:29.193 21:55:01 -- scripts/common.sh@353 -- # local d=2 00:04:29.193 21:55:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.193 21:55:01 -- scripts/common.sh@355 -- # echo 2 00:04:29.193 21:55:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.193 21:55:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.193 21:55:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.193 21:55:01 -- scripts/common.sh@368 -- # return 0 00:04:29.193 21:55:01 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.193 21:55:01 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.193 --rc genhtml_branch_coverage=1 00:04:29.193 --rc genhtml_function_coverage=1 00:04:29.193 --rc genhtml_legend=1 00:04:29.193 --rc geninfo_all_blocks=1 00:04:29.193 --rc geninfo_unexecuted_blocks=1 00:04:29.193 00:04:29.193 ' 00:04:29.193 21:55:01 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.193 --rc genhtml_branch_coverage=1 00:04:29.193 --rc genhtml_function_coverage=1 00:04:29.193 --rc genhtml_legend=1 00:04:29.193 --rc geninfo_all_blocks=1 00:04:29.193 --rc geninfo_unexecuted_blocks=1 00:04:29.193 00:04:29.193 ' 00:04:29.193 21:55:01 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.193 --rc genhtml_branch_coverage=1 00:04:29.193 --rc genhtml_function_coverage=1 00:04:29.193 --rc genhtml_legend=1 00:04:29.193 --rc geninfo_all_blocks=1 00:04:29.193 --rc geninfo_unexecuted_blocks=1 00:04:29.193 00:04:29.193 ' 00:04:29.193 21:55:01 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.193 --rc genhtml_branch_coverage=1 00:04:29.193 --rc genhtml_function_coverage=1 00:04:29.193 --rc genhtml_legend=1 00:04:29.193 --rc geninfo_all_blocks=1 00:04:29.193 --rc geninfo_unexecuted_blocks=1 00:04:29.193 00:04:29.193 ' 00:04:29.193 21:55:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:29.193 21:55:01 -- nvmf/common.sh@7 -- # uname -s 00:04:29.193 21:55:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.193 21:55:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.193 21:55:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.193 21:55:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.193 21:55:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.193 21:55:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.193 21:55:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.193 21:55:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.193 21:55:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.193 21:55:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.193 21:55:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:decdbd46-e55c-4ed8-bfae-d89fc37da2e9 00:04:29.193 
21:55:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=decdbd46-e55c-4ed8-bfae-d89fc37da2e9 00:04:29.193 21:55:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.193 21:55:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.193 21:55:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.193 21:55:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:29.193 21:55:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:29.193 21:55:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:29.193 21:55:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.193 21:55:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.193 21:55:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.193 21:55:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.193 21:55:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.193 21:55:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.193 21:55:01 -- paths/export.sh@5 -- # export PATH 00:04:29.193 21:55:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.193 21:55:01 -- nvmf/common.sh@51 -- # : 0 00:04:29.193 21:55:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:29.193 21:55:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:29.193 21:55:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:29.193 21:55:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.193 21:55:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.193 21:55:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:29.193 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:29.193 21:55:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:29.193 21:55:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:29.193 21:55:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:29.193 21:55:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:29.193 21:55:01 -- spdk/autotest.sh@32 -- # uname -s 00:04:29.193 21:55:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:29.193 21:55:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:29.193 21:55:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:29.193 21:55:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:29.193 21:55:01 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:29.193 21:55:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:29.193 21:55:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:29.193 21:55:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:29.193 21:55:01 -- spdk/autotest.sh@48 -- # udevadm_pid=54256 00:04:29.193 21:55:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:29.193 21:55:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:29.193 21:55:01 -- pm/common@17 -- # local monitor 00:04:29.193 21:55:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.193 21:55:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.193 21:55:01 -- pm/common@25 -- # sleep 1 00:04:29.193 21:55:01 -- pm/common@21 -- # date +%s 00:04:29.193 21:55:01 -- pm/common@21 -- # date +%s 00:04:29.194 21:55:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733522101 00:04:29.194 21:55:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733522101 00:04:29.194 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733522101_collect-vmstat.pm.log 00:04:29.194 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733522101_collect-cpu-load.pm.log 00:04:30.128 21:55:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:30.128 21:55:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:30.128 21:55:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.128 21:55:02 -- common/autotest_common.sh@10 -- # set +x 00:04:30.128 21:55:02 -- spdk/autotest.sh@59 -- # create_test_list 00:04:30.128 21:55:02 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:30.128 21:55:02 -- common/autotest_common.sh@10 -- # set +x 00:04:30.128 21:55:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:30.128 21:55:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:30.128 21:55:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:30.128 21:55:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:30.128 21:55:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:30.128 21:55:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:30.128 21:55:02 -- common/autotest_common.sh@1457 -- # uname 00:04:30.128 21:55:02 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:30.128 21:55:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:30.128 21:55:02 -- common/autotest_common.sh@1477 -- # uname 00:04:30.128 21:55:02 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:30.128 21:55:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:30.128 21:55:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:30.386 lcov: LCOV version 1.15 00:04:30.386 21:55:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:45.267 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:45.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:00.155 21:55:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:00.155 21:55:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.155 21:55:31 -- common/autotest_common.sh@10 -- # set +x 00:05:00.155 21:55:31 -- spdk/autotest.sh@78 -- # rm -f 00:05:00.155 21:55:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.155 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:00.155 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:00.155 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:00.155 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:00.155 21:55:32 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:00.155 21:55:32 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:00.155 21:55:32 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:00.155 21:55:32 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:00.155 21:55:32 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:00.155 21:55:32 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:00.155 21:55:32 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:00.155 21:55:32 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:00.155 21:55:32 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.155 21:55:32 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:00.155 21:55:32 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:00.155 21:55:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.155 21:55:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.155 21:55:32 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:00.155 21:55:32 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:00.155 21:55:32 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.155 21:55:32 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:05:00.156 21:55:32 -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:05:00.156 21:55:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:05:00.156 21:55:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.156 21:55:32 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:00.156 21:55:32 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:00.156 21:55:32 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.156 21:55:32 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:00.156 21:55:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:00.156 21:55:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:00.156 21:55:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.156 21:55:32 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.156 21:55:32 -- 
common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:05:00.156 21:55:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:00.156 21:55:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:00.156 21:55:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.156 21:55:32 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.156 21:55:32 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:05:00.156 21:55:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:00.156 21:55:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:00.156 21:55:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.156 21:55:32 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:00.156 21:55:32 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:00.156 21:55:32 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.156 21:55:32 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:05:00.156 21:55:32 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:00.156 21:55:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:00.156 21:55:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.156 21:55:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:00.156 21:55:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.156 21:55:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.156 21:55:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:00.156 21:55:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:00.156 21:55:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:00.156 No valid GPT data, bailing 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # pt= 00:05:00.156 21:55:32 -- scripts/common.sh@395 -- # return 1 00:05:00.156 21:55:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:00.156 1+0 records in 00:05:00.156 1+0 records out 00:05:00.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115438 s, 90.8 MB/s 00:05:00.156 21:55:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.156 21:55:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.156 21:55:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:00.156 21:55:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:00.156 21:55:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:00.156 No valid GPT data, bailing 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # pt= 00:05:00.156 21:55:32 -- scripts/common.sh@395 -- # return 1 00:05:00.156 21:55:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:00.156 1+0 records in 00:05:00.156 1+0 records out 00:05:00.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426977 s, 246 MB/s 00:05:00.156 21:55:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.156 21:55:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.156 21:55:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:00.156 21:55:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:00.156 21:55:32 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:00.156 No valid GPT data, bailing 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # pt= 00:05:00.156 21:55:32 -- scripts/common.sh@395 -- # return 1 00:05:00.156 21:55:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:00.156 1+0 records in 00:05:00.156 1+0 records out 00:05:00.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042001 s, 250 MB/s 00:05:00.156 21:55:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.156 21:55:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.156 21:55:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:00.156 21:55:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:00.156 21:55:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:00.156 No valid GPT data, bailing 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # pt= 00:05:00.156 21:55:32 -- scripts/common.sh@395 -- # return 1 00:05:00.156 21:55:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:00.156 1+0 records in 00:05:00.156 1+0 records out 00:05:00.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00256186 s, 409 MB/s 00:05:00.156 21:55:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.156 21:55:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.156 21:55:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:00.156 21:55:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:00.156 21:55:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:00.156 No valid GPT data, bailing 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # pt= 00:05:00.156 21:55:32 -- scripts/common.sh@395 -- # return 1 00:05:00.156 21:55:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:00.156 1+0 records in 00:05:00.156 1+0 records out 00:05:00.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00262675 s, 399 MB/s 00:05:00.156 21:55:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.156 21:55:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.156 21:55:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:00.156 21:55:32 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:00.156 21:55:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:00.156 No valid GPT data, bailing 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:00.156 21:55:32 -- scripts/common.sh@394 -- # pt= 00:05:00.156 21:55:32 -- scripts/common.sh@395 -- # return 1 00:05:00.156 21:55:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:00.156 1+0 records in 00:05:00.156 1+0 records out 00:05:00.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00295085 s, 355 MB/s 00:05:00.156 21:55:32 -- spdk/autotest.sh@105 -- # sync 00:05:00.156 21:55:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:00.156 21:55:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:00.156 21:55:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:01.526 
21:55:34 -- spdk/autotest.sh@111 -- # uname -s 00:05:01.526 21:55:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:01.526 21:55:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:01.526 21:55:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:02.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.351 Hugepages 00:05:02.351 node hugesize free / total 00:05:02.351 node0 1048576kB 0 / 0 00:05:02.351 node0 2048kB 0 / 0 00:05:02.351 00:05:02.351 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.351 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.351 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:02.609 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:02.609 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:02.609 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:02.609 21:55:35 -- spdk/autotest.sh@117 -- # uname -s 00:05:02.609 21:55:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:02.609 21:55:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:02.609 21:55:35 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.433 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.433 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.433 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.691 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.691 21:55:36 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:04.627 21:55:37 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:04.627 21:55:37 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:04.627 21:55:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.627 21:55:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:04.627 21:55:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:04.627 21:55:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:04.627 21:55:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.627 21:55:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:04.627 21:55:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:04.627 21:55:37 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:04.627 21:55:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:04.627 21:55:37 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.143 Waiting for block devices as requested 00:05:05.143 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.143 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.401 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.401 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:10.687 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:10.687 21:55:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:10.687 21:55:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
00:05:10.687 21:55:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:10.687 21:55:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:10.687 21:55:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:10.687 21:55:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:10.687 21:55:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:10.687 21:55:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:10.687 21:55:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:10.687 21:55:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:10.687 21:55:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:10.687 21:55:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:10.687 21:55:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:10.687 21:55:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:10.687 21:55:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:10.687 21:55:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:10.687 21:55:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:10.687 21:55:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:10.687 21:55:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:10.687 21:55:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:10.687 21:55:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:10.687 21:55:43 -- common/autotest_common.sh@1543 -- # continue 00:05:10.687 21:55:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:10.687 21:55:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:10.687 21:55:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:10.687 21:55:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:10.687 21:55:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:10.687 21:55:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:10.687 21:55:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:10.687 21:55:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:10.687 21:55:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:10.687 21:55:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:10.687 21:55:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:10.687 21:55:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:10.687 21:55:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:10.687 21:55:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:10.687 21:55:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:10.687 21:55:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:10.687 21:55:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:10.687 21:55:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:10.687 21:55:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:10.687 21:55:43 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:05:10.687 21:55:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:10.687 21:55:43 -- common/autotest_common.sh@1543 -- # continue 00:05:10.687 21:55:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:10.687 21:55:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:10.687 21:55:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:10.687 21:55:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:10.687 21:55:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:10.688 21:55:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:10.688 21:55:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:10.688 21:55:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:10.688 21:55:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:10.688 21:55:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:10.688 21:55:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:10.688 21:55:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:10.688 21:55:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:10.688 21:55:43 -- common/autotest_common.sh@1543 -- # continue 00:05:10.688 21:55:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:10.688 21:55:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:10.688 21:55:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:10.688 21:55:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:10.688 21:55:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:10.688 21:55:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:10.688 21:55:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:10.688 21:55:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:10.688 21:55:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:10.688 21:55:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:10.688 21:55:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:10.688 21:55:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:10.688 21:55:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:10.688 21:55:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:10.688 21:55:43 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:10.688 21:55:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:10.688 21:55:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:10.688 21:55:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:10.688 21:55:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:10.688 21:55:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:10.688 21:55:43 -- common/autotest_common.sh@1543 -- # continue 00:05:10.688 21:55:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:10.688 21:55:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.688 21:55:43 -- common/autotest_common.sh@10 -- # set +x 00:05:10.688 21:55:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:10.688 21:55:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.688 21:55:43 -- common/autotest_common.sh@10 -- # set +x 00:05:10.688 21:55:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.831 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.831 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.831 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.831 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.831 21:55:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:11.831 21:55:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.831 21:55:44 -- common/autotest_common.sh@10 -- # set +x 00:05:11.831 21:55:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:11.831 21:55:44 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:11.831 21:55:44 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:11.831 21:55:44 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:11.831 21:55:44 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:11.831 21:55:44 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:11.831 21:55:44 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:11.831 21:55:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:11.831 21:55:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.831 21:55:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.831 21:55:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.831 21:55:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:11.831 21:55:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.831 21:55:44 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:11.831 21:55:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:11.831 21:55:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:11.831 21:55:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:11.831 21:55:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:11.831 21:55:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.831 21:55:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:11.831 21:55:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:11.831 21:55:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:11.831 
21:55:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.831 21:55:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:11.831 21:55:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:11.831 21:55:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:11.831 21:55:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.831 21:55:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:11.831 21:55:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:11.831 21:55:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:11.831 21:55:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.831 21:55:44 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:11.831 21:55:44 -- common/autotest_common.sh@1572 -- # return 0 00:05:11.831 21:55:44 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:11.831 21:55:44 -- common/autotest_common.sh@1580 -- # return 0 00:05:11.831 21:55:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:12.091 21:55:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:12.091 21:55:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:12.091 21:55:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:12.091 21:55:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:12.091 21:55:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.091 21:55:44 -- common/autotest_common.sh@10 -- # set +x 00:05:12.091 21:55:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:12.091 21:55:44 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:12.091 21:55:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.091 21:55:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.091 21:55:44 -- common/autotest_common.sh@10 -- # set +x 00:05:12.091 ************************************ 00:05:12.091 START TEST env 00:05:12.091 ************************************ 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:12.091 * Looking for test storage... 
00:05:12.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.091 21:55:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.091 21:55:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.091 21:55:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.091 21:55:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.091 21:55:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.091 21:55:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.091 21:55:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.091 21:55:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.091 21:55:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.091 21:55:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.091 21:55:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.091 21:55:44 env -- scripts/common.sh@344 -- # case "$op" in 00:05:12.091 21:55:44 env -- scripts/common.sh@345 -- # : 1 00:05:12.091 21:55:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.091 21:55:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.091 21:55:44 env -- scripts/common.sh@365 -- # decimal 1 00:05:12.091 21:55:44 env -- scripts/common.sh@353 -- # local d=1 00:05:12.091 21:55:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.091 21:55:44 env -- scripts/common.sh@355 -- # echo 1 00:05:12.091 21:55:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.091 21:55:44 env -- scripts/common.sh@366 -- # decimal 2 00:05:12.091 21:55:44 env -- scripts/common.sh@353 -- # local d=2 00:05:12.091 21:55:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.091 21:55:44 env -- scripts/common.sh@355 -- # echo 2 00:05:12.091 21:55:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.091 21:55:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.091 21:55:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.091 21:55:44 env -- scripts/common.sh@368 -- # return 0 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.091 --rc genhtml_branch_coverage=1 00:05:12.091 --rc genhtml_function_coverage=1 00:05:12.091 --rc genhtml_legend=1 00:05:12.091 --rc geninfo_all_blocks=1 00:05:12.091 --rc geninfo_unexecuted_blocks=1 00:05:12.091 00:05:12.091 ' 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.091 --rc genhtml_branch_coverage=1 00:05:12.091 --rc genhtml_function_coverage=1 00:05:12.091 --rc genhtml_legend=1 00:05:12.091 --rc geninfo_all_blocks=1 00:05:12.091 --rc geninfo_unexecuted_blocks=1 00:05:12.091 00:05:12.091 ' 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.091 --rc genhtml_branch_coverage=1 00:05:12.091 --rc genhtml_function_coverage=1 00:05:12.091 --rc 
genhtml_legend=1 00:05:12.091 --rc geninfo_all_blocks=1 00:05:12.091 --rc geninfo_unexecuted_blocks=1 00:05:12.091 00:05:12.091 ' 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.091 --rc genhtml_branch_coverage=1 00:05:12.091 --rc genhtml_function_coverage=1 00:05:12.091 --rc genhtml_legend=1 00:05:12.091 --rc geninfo_all_blocks=1 00:05:12.091 --rc geninfo_unexecuted_blocks=1 00:05:12.091 00:05:12.091 ' 00:05:12.091 21:55:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.091 21:55:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.091 21:55:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.091 ************************************ 00:05:12.091 START TEST env_memory 00:05:12.091 ************************************ 00:05:12.091 21:55:44 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:12.091 00:05:12.091 00:05:12.091 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.091 http://cunit.sourceforge.net/ 00:05:12.091 00:05:12.091 00:05:12.091 Suite: memory 00:05:12.352 Test: alloc and free memory map ...[2024-12-06 21:55:44.962271] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:12.352 passed 00:05:12.352 Test: mem map translation ...[2024-12-06 21:55:45.001087] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:12.352 [2024-12-06 21:55:45.001141] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:12.352 [2024-12-06 21:55:45.001210] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:12.352 [2024-12-06 21:55:45.001226] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:12.352 passed 00:05:12.352 Test: mem map registration ...[2024-12-06 21:55:45.069294] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:12.352 [2024-12-06 21:55:45.069338] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:12.352 passed 00:05:12.352 Test: mem map adjacent registrations ...passed 00:05:12.352 00:05:12.352 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.352 suites 1 1 n/a 0 0 00:05:12.352 tests 4 4 4 0 0 00:05:12.352 asserts 152 152 152 0 n/a 00:05:12.352 00:05:12.352 Elapsed time = 0.233 seconds 00:05:12.352 00:05:12.352 real 0m0.270s 00:05:12.352 user 0m0.248s 00:05:12.352 sys 0m0.015s 00:05:12.352 ************************************ 00:05:12.352 END TEST env_memory 00:05:12.352 ************************************ 00:05:12.352 21:55:45 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.352 21:55:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:12.650 21:55:45 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:12.650 21:55:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.650 21:55:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.650 21:55:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.650 ************************************ 00:05:12.650 START TEST env_vtophys 00:05:12.650 ************************************ 00:05:12.650 21:55:45 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:12.650 EAL: lib.eal log level changed from notice to debug 00:05:12.650 EAL: Detected lcore 0 as core 0 on socket 0 00:05:12.650 EAL: Detected lcore 1 as core 0 on socket 0 00:05:12.650 EAL: Detected lcore 2 as core 0 on socket 0 00:05:12.650 EAL: Detected lcore 3 as core 0 on socket 0 00:05:12.650 EAL: Detected lcore 4 as core 0 on socket 0 00:05:12.650 EAL: Detected lcore 5 as core 0 on socket 0 00:05:12.650 EAL: Detected lcore 6 as core 0 on socket 0 00:05:12.650 EAL: Detected lcore 7 as core 0 on socket 0 00:05:12.650 EAL: Detected lcore 8 as core 0 on socket 0 00:05:12.650 EAL: Detected lcore 9 as core 0 on socket 0 00:05:12.650 EAL: Maximum logical cores by configuration: 128 00:05:12.650 EAL: Detected CPU lcores: 10 00:05:12.650 EAL: Detected NUMA nodes: 1 00:05:12.650 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:12.650 EAL: Detected shared linkage of DPDK 00:05:12.650 EAL: No shared files mode enabled, IPC will be disabled 00:05:12.650 EAL: Selected IOVA mode 'PA' 00:05:12.650 EAL: Probing VFIO support... 00:05:12.650 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:12.650 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:12.650 EAL: Ask a virtual area of 0x2e000 bytes 00:05:12.650 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:12.650 EAL: Setting up physically contiguous memory... 
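The env_memory failures logged above are the expected negative cases: SPDK's memory map requires both the virtual address and the length passed to spdk_mem_register() to be 2 MiB aligned, so vaddr=0x200000 with len=1234 and vaddr=0x4d2 with a 2 MiB length are both rejected. A minimal sketch of the API those checks guard, assuming an already-initialized SPDK environment; the helper name is illustrative and error handling is trimmed:

    #include "spdk/env.h"

    /* Sketch only: register a DMA-visible region and release it again.
     * spdk_mem_register()/spdk_mem_unregister() reject any vaddr or len that
     * is not a multiple of 2 MiB, which is exactly what the "invalid
     * spdk_mem_register parameters" errors above exercise. */
    static int
    register_region(void *vaddr, size_t len)
    {
        int rc = spdk_mem_register(vaddr, len);
        if (rc != 0) {
            return rc;      /* e.g. unaligned vaddr or len */
        }
        /* ... region is now visible to spdk_vtophys()/DMA ... */
        return spdk_mem_unregister(vaddr, len);
    }

The mem map translation errors in the same test (vaddr=2097152 len=1234, and the out-of-range user-mode address 281474976710656) come from the corresponding alignment and range checks in spdk_mem_map_set_translation().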
00:05:12.650 EAL: Setting maximum number of open files to 524288 00:05:12.650 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:12.650 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:12.650 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.650 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:12.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.650 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.650 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:12.650 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:12.650 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.650 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:12.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.650 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.650 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:12.650 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:12.650 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.650 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:12.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.650 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.650 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:12.650 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:12.650 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.650 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:12.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.650 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.650 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:12.650 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:12.650 EAL: Hugepages will be freed exactly as allocated. 00:05:12.650 EAL: No shared files mode enabled, IPC is disabled 00:05:12.650 EAL: No shared files mode enabled, IPC is disabled 00:05:12.650 EAL: TSC frequency is ~2600000 KHz 00:05:12.650 EAL: Main lcore 0 is ready (tid=7f6eb98d9a40;cpuset=[0]) 00:05:12.650 EAL: Trying to obtain current memory policy. 00:05:12.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.650 EAL: Restoring previous memory policy: 0 00:05:12.650 EAL: request: mp_malloc_sync 00:05:12.650 EAL: No shared files mode enabled, IPC is disabled 00:05:12.650 EAL: Heap on socket 0 was expanded by 2MB 00:05:12.650 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:12.650 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:12.650 EAL: Mem event callback 'spdk:(nil)' registered 00:05:12.650 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:12.650 00:05:12.650 00:05:12.650 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.650 http://cunit.sourceforge.net/ 00:05:12.650 00:05:12.650 00:05:12.650 Suite: components_suite 00:05:13.234 Test: vtophys_malloc_test ...passed 00:05:13.234 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
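The EAL bring-up above (lcore detection, IOVA mode 'PA', the memseg lists, and the virtual areas reserved starting at 0x200000000000) is driven from the application side by spdk_env_init(). A hedged sketch of that call; the option fields are from spdk/env.h, while the function name and process name here are illustrative:

    #include "spdk/env.h"

    int
    init_env(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys";    /* process name, illustrative */
        /* Matches the areas reserved at 0x200000000000 above and the
         * --base-virtaddr=0x200000000000 argument env.sh passes. */
        opts.base_virtaddr = 0x200000000000ULL;
        return spdk_env_init(&opts);   /* emits the EAL log lines above */
    }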
00:05:13.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.234 EAL: Restoring previous memory policy: 4 00:05:13.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.234 EAL: request: mp_malloc_sync 00:05:13.234 EAL: No shared files mode enabled, IPC is disabled 00:05:13.234 EAL: Heap on socket 0 was expanded by 4MB 00:05:13.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.234 EAL: request: mp_malloc_sync 00:05:13.234 EAL: No shared files mode enabled, IPC is disabled 00:05:13.234 EAL: Heap on socket 0 was shrunk by 4MB 00:05:13.234 EAL: Trying to obtain current memory policy. 00:05:13.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.234 EAL: Restoring previous memory policy: 4 00:05:13.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.234 EAL: request: mp_malloc_sync 00:05:13.234 EAL: No shared files mode enabled, IPC is disabled 00:05:13.234 EAL: Heap on socket 0 was expanded by 6MB 00:05:13.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.234 EAL: request: mp_malloc_sync 00:05:13.234 EAL: No shared files mode enabled, IPC is disabled 00:05:13.234 EAL: Heap on socket 0 was shrunk by 6MB 00:05:13.234 EAL: Trying to obtain current memory policy. 00:05:13.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.234 EAL: Restoring previous memory policy: 4 00:05:13.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.234 EAL: request: mp_malloc_sync 00:05:13.234 EAL: No shared files mode enabled, IPC is disabled 00:05:13.234 EAL: Heap on socket 0 was expanded by 10MB 00:05:13.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.234 EAL: request: mp_malloc_sync 00:05:13.234 EAL: No shared files mode enabled, IPC is disabled 00:05:13.234 EAL: Heap on socket 0 was shrunk by 10MB 00:05:13.234 EAL: Trying to obtain current memory policy. 00:05:13.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.234 EAL: Restoring previous memory policy: 4 00:05:13.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.234 EAL: request: mp_malloc_sync 00:05:13.234 EAL: No shared files mode enabled, IPC is disabled 00:05:13.234 EAL: Heap on socket 0 was expanded by 18MB 00:05:13.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.235 EAL: request: mp_malloc_sync 00:05:13.235 EAL: No shared files mode enabled, IPC is disabled 00:05:13.235 EAL: Heap on socket 0 was shrunk by 18MB 00:05:13.235 EAL: Trying to obtain current memory policy. 00:05:13.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.235 EAL: Restoring previous memory policy: 4 00:05:13.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.235 EAL: request: mp_malloc_sync 00:05:13.235 EAL: No shared files mode enabled, IPC is disabled 00:05:13.235 EAL: Heap on socket 0 was expanded by 34MB 00:05:13.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.235 EAL: request: mp_malloc_sync 00:05:13.235 EAL: No shared files mode enabled, IPC is disabled 00:05:13.235 EAL: Heap on socket 0 was shrunk by 34MB 00:05:13.235 EAL: Trying to obtain current memory policy. 
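Each expand/shrink pair above is one iteration of vtophys_spdk_malloc_test: allocate a pinned buffer (the heap grows and the 'spdk:(nil)' mem event callback fires), translate it, then free it (the heap shrinks again). The allocation sizes roughly double each round, which is why the deltas climb from 4 MB toward the 1026 MB round further below. A simplified sketch of one iteration, assuming an initialized environment; the helper name is illustrative:

    #include "spdk/env.h"

    /* One allocate/translate/free round. spdk_dma_malloc() grows the DPDK
     * heap on demand ("Heap on socket 0 was expanded"), spdk_dma_free()
     * lets it shrink again ("... was shrunk"). */
    static int
    alloc_translate_free(size_t size)
    {
        uint64_t mapped = size;
        void *buf = spdk_dma_malloc(size, 0, NULL);
        if (buf == NULL) {
            return -1;
        }
        uint64_t paddr = spdk_vtophys(buf, &mapped);
        spdk_dma_free(buf);
        return paddr == SPDK_VTOPHYS_ERROR ? -1 : 0;
    }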
00:05:13.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.235 EAL: Restoring previous memory policy: 4 00:05:13.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.235 EAL: request: mp_malloc_sync 00:05:13.235 EAL: No shared files mode enabled, IPC is disabled 00:05:13.235 EAL: Heap on socket 0 was expanded by 66MB 00:05:13.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.235 EAL: request: mp_malloc_sync 00:05:13.235 EAL: No shared files mode enabled, IPC is disabled 00:05:13.235 EAL: Heap on socket 0 was shrunk by 66MB 00:05:13.495 EAL: Trying to obtain current memory policy. 00:05:13.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.495 EAL: Restoring previous memory policy: 4 00:05:13.495 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.495 EAL: request: mp_malloc_sync 00:05:13.495 EAL: No shared files mode enabled, IPC is disabled 00:05:13.495 EAL: Heap on socket 0 was expanded by 130MB 00:05:13.495 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.757 EAL: request: mp_malloc_sync 00:05:13.757 EAL: No shared files mode enabled, IPC is disabled 00:05:13.757 EAL: Heap on socket 0 was shrunk by 130MB 00:05:13.757 EAL: Trying to obtain current memory policy. 00:05:13.757 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.757 EAL: Restoring previous memory policy: 4 00:05:13.757 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.757 EAL: request: mp_malloc_sync 00:05:13.757 EAL: No shared files mode enabled, IPC is disabled 00:05:13.757 EAL: Heap on socket 0 was expanded by 258MB 00:05:14.018 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.279 EAL: request: mp_malloc_sync 00:05:14.279 EAL: No shared files mode enabled, IPC is disabled 00:05:14.279 EAL: Heap on socket 0 was shrunk by 258MB 00:05:14.540 EAL: Trying to obtain current memory policy. 00:05:14.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.540 EAL: Restoring previous memory policy: 4 00:05:14.540 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.540 EAL: request: mp_malloc_sync 00:05:14.540 EAL: No shared files mode enabled, IPC is disabled 00:05:14.540 EAL: Heap on socket 0 was expanded by 514MB 00:05:15.115 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.376 EAL: request: mp_malloc_sync 00:05:15.376 EAL: No shared files mode enabled, IPC is disabled 00:05:15.376 EAL: Heap on socket 0 was shrunk by 514MB 00:05:15.946 EAL: Trying to obtain current memory policy. 
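The "Calling mem event callback" lines around each allocation are DPDK memory events being fanned out to every registered SPDK memory map; the env_mem_callbacks test later in this run exercises the same notification path directly. A minimal consumer-side sketch of a map whose callback fires on every register/unregister; the callback body and function names are illustrative:

    #include "spdk/env.h"

    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        /* action is SPDK_MEM_MAP_NOTIFY_REGISTER when memory is added and
         * SPDK_MEM_MAP_NOTIFY_UNREGISTER when it is released. */
        return 0;
    }

    static const struct spdk_mem_map_ops ops = {
        .notify_cb = notify_cb,
        .are_contiguous = NULL,
    };

    struct spdk_mem_map *
    make_map(void)
    {
        return spdk_mem_map_alloc(0 /* default translation */, &ops, NULL);
    }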
00:05:15.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.946 EAL: Restoring previous memory policy: 4 00:05:15.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.946 EAL: request: mp_malloc_sync 00:05:15.946 EAL: No shared files mode enabled, IPC is disabled 00:05:15.946 EAL: Heap on socket 0 was expanded by 1026MB 00:05:17.333 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.333 EAL: request: mp_malloc_sync 00:05:17.333 EAL: No shared files mode enabled, IPC is disabled 00:05:17.333 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:18.331 passed 00:05:18.331 00:05:18.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.331 suites 1 1 n/a 0 0 00:05:18.331 tests 2 2 2 0 0 00:05:18.331 asserts 5838 5838 5838 0 n/a 00:05:18.331 00:05:18.331 Elapsed time = 5.563 seconds 00:05:18.331 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.331 EAL: request: mp_malloc_sync 00:05:18.331 EAL: No shared files mode enabled, IPC is disabled 00:05:18.331 EAL: Heap on socket 0 was shrunk by 2MB 00:05:18.331 EAL: No shared files mode enabled, IPC is disabled 00:05:18.331 EAL: No shared files mode enabled, IPC is disabled 00:05:18.331 EAL: No shared files mode enabled, IPC is disabled 00:05:18.331 00:05:18.331 real 0m5.844s 00:05:18.331 user 0m4.790s 00:05:18.331 sys 0m0.895s 00:05:18.331 21:55:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.331 ************************************ 00:05:18.331 END TEST env_vtophys 00:05:18.331 ************************************ 00:05:18.331 21:55:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:18.331 21:55:51 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:18.331 21:55:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.331 21:55:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.331 21:55:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.331 ************************************ 00:05:18.331 START TEST env_pci 00:05:18.331 ************************************ 00:05:18.331 21:55:51 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:18.331 00:05:18.331 00:05:18.331 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.331 http://cunit.sourceforge.net/ 00:05:18.331 00:05:18.331 00:05:18.331 Suite: pci 00:05:18.331 Test: pci_hook ...[2024-12-06 21:55:51.172378] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57000 has claimed it 00:05:18.331 passed 00:05:18.331 00:05:18.331 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.331 suites 1 1 n/a 0 0 00:05:18.331 tests 1 1 1 0 0 00:05:18.331 asserts 25 25 25 0 n/a 00:05:18.331 00:05:18.331 Elapsed time = 0.007 seconds 00:05:18.331 EAL: Cannot find device (10000:00:01.0) 00:05:18.331 EAL: Failed to attach device on primary process 00:05:18.592 00:05:18.592 real 0m0.069s 00:05:18.592 user 0m0.030s 00:05:18.592 sys 0m0.038s 00:05:18.592 21:55:51 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.592 ************************************ 00:05:18.592 END TEST env_pci 00:05:18.592 ************************************ 00:05:18.592 21:55:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:18.592 21:55:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:18.592 21:55:51 env -- env/env.sh@15 -- # uname 00:05:18.592 21:55:51 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:18.592 21:55:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:18.592 21:55:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.592 21:55:51 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:18.592 21:55:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.592 21:55:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.592 ************************************ 00:05:18.592 START TEST env_dpdk_post_init 00:05:18.592 ************************************ 00:05:18.592 21:55:51 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.592 EAL: Detected CPU lcores: 10 00:05:18.592 EAL: Detected NUMA nodes: 1 00:05:18.592 EAL: Detected shared linkage of DPDK 00:05:18.592 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.592 EAL: Selected IOVA mode 'PA' 00:05:18.592 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.852 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:18.852 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:18.852 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:18.852 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:18.852 Starting DPDK initialization... 00:05:18.852 Starting SPDK post initialization... 00:05:18.852 SPDK NVMe probe 00:05:18.852 Attaching to 0000:00:10.0 00:05:18.852 Attaching to 0000:00:11.0 00:05:18.852 Attaching to 0000:00:12.0 00:05:18.852 Attaching to 0000:00:13.0 00:05:18.852 Attached to 0000:00:10.0 00:05:18.852 Attached to 0000:00:11.0 00:05:18.852 Attached to 0000:00:13.0 00:05:18.852 Attached to 0000:00:12.0 00:05:18.852 Cleaning up... 
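The probe/attach sequence above enumerates the four QEMU-emulated NVMe controllers (vendor:device 1b36:0010 at 0000:00:10.0 through 0000:00:13.0). A hedged sketch of the driver-side flow behind those lines; passing a NULL transport ID makes spdk_nvme_probe() scan all local PCIe controllers, and the function names here are illustrative:

    #include "spdk/nvme.h"
    #include <stdio.h>

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;    /* attach to every controller found */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    probe_all(void)
    {
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }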
00:05:18.852 00:05:18.852 real 0m0.260s 00:05:18.852 user 0m0.092s 00:05:18.852 sys 0m0.071s 00:05:18.852 21:55:51 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.852 ************************************ 00:05:18.852 END TEST env_dpdk_post_init 00:05:18.852 ************************************ 00:05:18.852 21:55:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.852 21:55:51 env -- env/env.sh@26 -- # uname 00:05:18.852 21:55:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.852 21:55:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.852 21:55:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.852 21:55:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.852 21:55:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.852 ************************************ 00:05:18.852 START TEST env_mem_callbacks 00:05:18.852 ************************************ 00:05:18.852 21:55:51 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.852 EAL: Detected CPU lcores: 10 00:05:18.852 EAL: Detected NUMA nodes: 1 00:05:18.852 EAL: Detected shared linkage of DPDK 00:05:18.852 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.852 EAL: Selected IOVA mode 'PA' 00:05:19.110 00:05:19.110 00:05:19.110 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.110 http://cunit.sourceforge.net/ 00:05:19.110 00:05:19.110 00:05:19.110 Suite: memory 00:05:19.110 Test: test ... 00:05:19.110 register 0x200000200000 2097152 00:05:19.110 malloc 3145728 00:05:19.110 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:19.110 register 0x200000400000 4194304 00:05:19.110 buf 0x2000004fffc0 len 3145728 PASSED 00:05:19.110 malloc 64 00:05:19.110 buf 0x2000004ffec0 len 64 PASSED 00:05:19.110 malloc 4194304 00:05:19.110 register 0x200000800000 6291456 00:05:19.110 buf 0x2000009fffc0 len 4194304 PASSED 00:05:19.111 free 0x2000004fffc0 3145728 00:05:19.111 free 0x2000004ffec0 64 00:05:19.111 unregister 0x200000400000 4194304 PASSED 00:05:19.111 free 0x2000009fffc0 4194304 00:05:19.111 unregister 0x200000800000 6291456 PASSED 00:05:19.111 malloc 8388608 00:05:19.111 register 0x200000400000 10485760 00:05:19.111 buf 0x2000005fffc0 len 8388608 PASSED 00:05:19.111 free 0x2000005fffc0 8388608 00:05:19.111 unregister 0x200000400000 10485760 PASSED 00:05:19.111 passed 00:05:19.111 00:05:19.111 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.111 suites 1 1 n/a 0 0 00:05:19.111 tests 1 1 1 0 0 00:05:19.111 asserts 15 15 15 0 n/a 00:05:19.111 00:05:19.111 Elapsed time = 0.051 seconds 00:05:19.111 00:05:19.111 real 0m0.230s 00:05:19.111 user 0m0.068s 00:05:19.111 sys 0m0.059s 00:05:19.111 ************************************ 00:05:19.111 END TEST env_mem_callbacks 00:05:19.111 ************************************ 00:05:19.111 21:55:51 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.111 21:55:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:19.111 00:05:19.111 real 0m7.170s 00:05:19.111 user 0m5.381s 00:05:19.111 sys 0m1.322s 00:05:19.111 ************************************ 00:05:19.111 END TEST env 00:05:19.111 ************************************ 00:05:19.111 21:55:51 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.111 21:55:51 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:19.111 21:55:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:19.111 21:55:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.111 21:55:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.111 21:55:51 -- common/autotest_common.sh@10 -- # set +x 00:05:19.111 ************************************ 00:05:19.111 START TEST rpc 00:05:19.111 ************************************ 00:05:19.111 21:55:51 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:19.371 * Looking for test storage... 00:05:19.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.371 21:55:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.371 21:55:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.371 21:55:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.371 21:55:52 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.371 21:55:52 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.371 21:55:52 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.371 21:55:52 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.371 21:55:52 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.371 21:55:52 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.371 21:55:52 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.371 21:55:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.371 21:55:52 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.371 21:55:52 rpc -- scripts/common.sh@345 -- # : 1 00:05:19.371 21:55:52 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.371 21:55:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.371 21:55:52 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.371 21:55:52 rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.371 21:55:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.371 21:55:52 rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.371 21:55:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.371 21:55:52 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.371 21:55:52 rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.371 21:55:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.371 21:55:52 rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.371 21:55:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.371 21:55:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.371 21:55:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.371 21:55:52 rpc -- scripts/common.sh@368 -- # return 0 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.371 --rc genhtml_branch_coverage=1 00:05:19.371 --rc genhtml_function_coverage=1 00:05:19.371 --rc genhtml_legend=1 00:05:19.371 --rc geninfo_all_blocks=1 00:05:19.371 --rc geninfo_unexecuted_blocks=1 00:05:19.371 00:05:19.371 ' 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.371 --rc genhtml_branch_coverage=1 00:05:19.371 --rc genhtml_function_coverage=1 00:05:19.371 --rc genhtml_legend=1 00:05:19.371 --rc geninfo_all_blocks=1 00:05:19.371 --rc geninfo_unexecuted_blocks=1 00:05:19.371 00:05:19.371 ' 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.371 --rc genhtml_branch_coverage=1 00:05:19.371 --rc genhtml_function_coverage=1 00:05:19.371 --rc genhtml_legend=1 00:05:19.371 --rc geninfo_all_blocks=1 00:05:19.371 --rc geninfo_unexecuted_blocks=1 00:05:19.371 00:05:19.371 ' 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.371 --rc genhtml_branch_coverage=1 00:05:19.371 --rc genhtml_function_coverage=1 00:05:19.371 --rc genhtml_legend=1 00:05:19.371 --rc geninfo_all_blocks=1 00:05:19.371 --rc geninfo_unexecuted_blocks=1 00:05:19.371 00:05:19.371 ' 00:05:19.371 21:55:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57127 00:05:19.371 21:55:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.371 21:55:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57127 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@835 -- # '[' -z 57127 ']' 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
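From here on the tests drive the target through rpc_cmd, which sends JSON-RPC requests to the spdk_tgt instance listening on /var/tmp/spdk.sock (pid 57127 below). A rough C equivalent using SPDK's JSON-RPC client API, shown only to make the request/response shape concrete; the polling loop and error handling are simplified and the function name is illustrative:

    #include "spdk/jsonrpc.h"

    int
    query_bdevs(void)
    {
        struct spdk_jsonrpc_client *client;
        struct spdk_jsonrpc_client_request *req;
        struct spdk_jsonrpc_client_response *resp;
        struct spdk_json_write_ctx *w;

        client = spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);
        if (client == NULL) {
            return -1;
        }
        req = spdk_jsonrpc_client_create_request();
        w = spdk_jsonrpc_begin_request(req, 1, "bdev_get_bdevs");
        spdk_jsonrpc_end_request(req, w);    /* no params */
        spdk_jsonrpc_client_send_request(client, req);

        /* poll until a response arrives (>0) or an error occurs (<0) */
        while (spdk_jsonrpc_client_poll(client, 1) == 0) {
        }
        resp = spdk_jsonrpc_client_get_response(client);
        /* resp->result is the parsed bdev array, cf. the JSON dumps below */
        spdk_jsonrpc_client_free_response(resp);
        spdk_jsonrpc_client_close(client);
        return 0;
    }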
00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.371 21:55:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.371 21:55:52 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:19.371 [2024-12-06 21:55:52.212696] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:05:19.371 [2024-12-06 21:55:52.212860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57127 ] 00:05:19.630 [2024-12-06 21:55:52.379370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.890 [2024-12-06 21:55:52.516022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:19.890 [2024-12-06 21:55:52.516105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57127' to capture a snapshot of events at runtime. 00:05:19.890 [2024-12-06 21:55:52.516118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:19.890 [2024-12-06 21:55:52.516130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:19.890 [2024-12-06 21:55:52.516139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57127 for offline analysis/debug. 00:05:19.890 [2024-12-06 21:55:52.517121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.460 21:55:53 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.460 21:55:53 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.460 21:55:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:20.460 21:55:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:20.460 21:55:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:20.460 21:55:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:20.460 21:55:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.460 21:55:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.460 21:55:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.460 ************************************ 00:05:20.460 START TEST rpc_integrity 00:05:20.460 ************************************ 00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:20.460 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.460 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.460 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:20.460 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.460 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.460 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:20.460 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.460 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.460 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.460 { 00:05:20.460 "name": "Malloc0", 00:05:20.460 "aliases": [ 00:05:20.460 "881c76dc-5962-487a-be6e-e0abceb654fe" 00:05:20.460 ], 00:05:20.460 "product_name": "Malloc disk", 00:05:20.460 "block_size": 512, 00:05:20.460 "num_blocks": 16384, 00:05:20.460 "uuid": "881c76dc-5962-487a-be6e-e0abceb654fe", 00:05:20.460 "assigned_rate_limits": { 00:05:20.460 "rw_ios_per_sec": 0, 00:05:20.460 "rw_mbytes_per_sec": 0, 00:05:20.460 "r_mbytes_per_sec": 0, 00:05:20.460 "w_mbytes_per_sec": 0 00:05:20.460 }, 00:05:20.460 "claimed": false, 00:05:20.460 "zoned": false, 00:05:20.460 "supported_io_types": { 00:05:20.460 "read": true, 00:05:20.460 "write": true, 00:05:20.460 "unmap": true, 00:05:20.460 "flush": true, 00:05:20.460 "reset": true, 00:05:20.460 "nvme_admin": false, 00:05:20.460 "nvme_io": false, 00:05:20.460 "nvme_io_md": false, 00:05:20.460 "write_zeroes": true, 00:05:20.460 "zcopy": true, 00:05:20.460 "get_zone_info": false, 00:05:20.460 "zone_management": false, 00:05:20.460 "zone_append": false, 00:05:20.460 "compare": false, 00:05:20.460 "compare_and_write": false, 00:05:20.460 "abort": true, 00:05:20.460 "seek_hole": false, 00:05:20.460 "seek_data": false, 00:05:20.460 "copy": true, 00:05:20.460 "nvme_iov_md": false 00:05:20.460 }, 00:05:20.460 "memory_domains": [ 00:05:20.460 { 00:05:20.460 "dma_device_id": "system", 00:05:20.460 "dma_device_type": 1 00:05:20.460 }, 00:05:20.460 { 00:05:20.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.460 "dma_device_type": 2 00:05:20.460 } 00:05:20.460 ], 00:05:20.460 "driver_specific": {} 00:05:20.460 } 00:05:20.460 ]' 00:05:20.460 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:20.721 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.721 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:20.721 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.721 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.721 [2024-12-06 21:55:53.347320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:20.721 [2024-12-06 21:55:53.347390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.721 [2024-12-06 21:55:53.347419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:20.721 [2024-12-06 21:55:53.347431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.721 [2024-12-06 21:55:53.349747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.721 [2024-12-06 21:55:53.349794] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.721 
Passthru0 00:05:20.721 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.721 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.721 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.721 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.721 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.721 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.721 { 00:05:20.721 "name": "Malloc0", 00:05:20.721 "aliases": [ 00:05:20.721 "881c76dc-5962-487a-be6e-e0abceb654fe" 00:05:20.721 ], 00:05:20.721 "product_name": "Malloc disk", 00:05:20.721 "block_size": 512, 00:05:20.721 "num_blocks": 16384, 00:05:20.721 "uuid": "881c76dc-5962-487a-be6e-e0abceb654fe", 00:05:20.721 "assigned_rate_limits": { 00:05:20.721 "rw_ios_per_sec": 0, 00:05:20.721 "rw_mbytes_per_sec": 0, 00:05:20.721 "r_mbytes_per_sec": 0, 00:05:20.721 "w_mbytes_per_sec": 0 00:05:20.721 }, 00:05:20.721 "claimed": true, 00:05:20.721 "claim_type": "exclusive_write", 00:05:20.721 "zoned": false, 00:05:20.721 "supported_io_types": { 00:05:20.721 "read": true, 00:05:20.721 "write": true, 00:05:20.721 "unmap": true, 00:05:20.721 "flush": true, 00:05:20.721 "reset": true, 00:05:20.721 "nvme_admin": false, 00:05:20.721 "nvme_io": false, 00:05:20.721 "nvme_io_md": false, 00:05:20.721 "write_zeroes": true, 00:05:20.721 "zcopy": true, 00:05:20.721 "get_zone_info": false, 00:05:20.721 "zone_management": false, 00:05:20.721 "zone_append": false, 00:05:20.721 "compare": false, 00:05:20.721 "compare_and_write": false, 00:05:20.721 "abort": true, 00:05:20.721 "seek_hole": false, 00:05:20.721 "seek_data": false, 00:05:20.721 "copy": true, 00:05:20.721 "nvme_iov_md": false 00:05:20.721 }, 00:05:20.721 "memory_domains": [ 00:05:20.721 { 00:05:20.721 "dma_device_id": "system", 00:05:20.721 "dma_device_type": 1 00:05:20.721 }, 00:05:20.721 { 00:05:20.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.721 "dma_device_type": 2 00:05:20.721 } 00:05:20.721 ], 00:05:20.721 "driver_specific": {} 00:05:20.721 }, 00:05:20.721 { 00:05:20.721 "name": "Passthru0", 00:05:20.721 "aliases": [ 00:05:20.721 "5f967325-72f1-576d-aaa8-a9cf5fc6192f" 00:05:20.721 ], 00:05:20.721 "product_name": "passthru", 00:05:20.721 "block_size": 512, 00:05:20.721 "num_blocks": 16384, 00:05:20.721 "uuid": "5f967325-72f1-576d-aaa8-a9cf5fc6192f", 00:05:20.721 "assigned_rate_limits": { 00:05:20.721 "rw_ios_per_sec": 0, 00:05:20.721 "rw_mbytes_per_sec": 0, 00:05:20.721 "r_mbytes_per_sec": 0, 00:05:20.721 "w_mbytes_per_sec": 0 00:05:20.721 }, 00:05:20.721 "claimed": false, 00:05:20.721 "zoned": false, 00:05:20.721 "supported_io_types": { 00:05:20.721 "read": true, 00:05:20.721 "write": true, 00:05:20.721 "unmap": true, 00:05:20.721 "flush": true, 00:05:20.721 "reset": true, 00:05:20.722 "nvme_admin": false, 00:05:20.722 "nvme_io": false, 00:05:20.722 "nvme_io_md": false, 00:05:20.722 "write_zeroes": true, 00:05:20.722 "zcopy": true, 00:05:20.722 "get_zone_info": false, 00:05:20.722 "zone_management": false, 00:05:20.722 "zone_append": false, 00:05:20.722 "compare": false, 00:05:20.722 "compare_and_write": false, 00:05:20.722 "abort": true, 00:05:20.722 "seek_hole": false, 00:05:20.722 "seek_data": false, 00:05:20.722 "copy": true, 00:05:20.722 "nvme_iov_md": false 00:05:20.722 }, 00:05:20.722 "memory_domains": [ 00:05:20.722 { 00:05:20.722 "dma_device_id": "system", 00:05:20.722 "dma_device_type": 1 00:05:20.722 }, 
00:05:20.722 { 00:05:20.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.722 "dma_device_type": 2 00:05:20.722 } 00:05:20.722 ], 00:05:20.722 "driver_specific": { 00:05:20.722 "passthru": { 00:05:20.722 "name": "Passthru0", 00:05:20.722 "base_bdev_name": "Malloc0" 00:05:20.722 } 00:05:20.722 } 00:05:20.722 } 00:05:20.722 ]' 00:05:20.722 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:20.722 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.722 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.722 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.722 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.722 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.722 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:20.722 21:55:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.722 00:05:20.722 real 0m0.246s 00:05:20.722 user 0m0.128s 00:05:20.722 sys 0m0.032s 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.722 ************************************ 00:05:20.722 END TEST rpc_integrity 00:05:20.722 ************************************ 00:05:20.722 21:55:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.722 21:55:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:20.722 21:55:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.722 21:55:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.722 21:55:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.722 ************************************ 00:05:20.722 START TEST rpc_plugins 00:05:20.722 ************************************ 00:05:20.722 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:20.722 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:20.722 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.722 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.722 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.722 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:20.722 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:20.722 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.722 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.722 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.722 21:55:53 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:20.722 { 00:05:20.722 "name": "Malloc1", 00:05:20.722 "aliases": [ 00:05:20.722 "dd1ef3c7-89f6-4473-89e5-b085fa5f9c6e" 00:05:20.722 ], 00:05:20.722 "product_name": "Malloc disk", 00:05:20.722 "block_size": 4096, 00:05:20.722 "num_blocks": 256, 00:05:20.722 "uuid": "dd1ef3c7-89f6-4473-89e5-b085fa5f9c6e", 00:05:20.722 "assigned_rate_limits": { 00:05:20.722 "rw_ios_per_sec": 0, 00:05:20.722 "rw_mbytes_per_sec": 0, 00:05:20.722 "r_mbytes_per_sec": 0, 00:05:20.722 "w_mbytes_per_sec": 0 00:05:20.722 }, 00:05:20.722 "claimed": false, 00:05:20.722 "zoned": false, 00:05:20.722 "supported_io_types": { 00:05:20.722 "read": true, 00:05:20.722 "write": true, 00:05:20.722 "unmap": true, 00:05:20.722 "flush": true, 00:05:20.722 "reset": true, 00:05:20.722 "nvme_admin": false, 00:05:20.722 "nvme_io": false, 00:05:20.722 "nvme_io_md": false, 00:05:20.722 "write_zeroes": true, 00:05:20.722 "zcopy": true, 00:05:20.722 "get_zone_info": false, 00:05:20.722 "zone_management": false, 00:05:20.722 "zone_append": false, 00:05:20.722 "compare": false, 00:05:20.722 "compare_and_write": false, 00:05:20.722 "abort": true, 00:05:20.722 "seek_hole": false, 00:05:20.722 "seek_data": false, 00:05:20.722 "copy": true, 00:05:20.722 "nvme_iov_md": false 00:05:20.722 }, 00:05:20.722 "memory_domains": [ 00:05:20.722 { 00:05:20.722 "dma_device_id": "system", 00:05:20.722 "dma_device_type": 1 00:05:20.722 }, 00:05:20.722 { 00:05:20.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.722 "dma_device_type": 2 00:05:20.722 } 00:05:20.722 ], 00:05:20.722 "driver_specific": {} 00:05:20.722 } 00:05:20.722 ]' 00:05:20.722 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:20.983 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:20.983 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:20.983 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.983 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.983 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.983 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:20.983 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.983 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.983 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.983 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:20.983 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:20.983 21:55:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:20.983 00:05:20.983 real 0m0.123s 00:05:20.983 user 0m0.065s 00:05:20.983 sys 0m0.016s 00:05:20.983 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.983 ************************************ 00:05:20.983 END TEST rpc_plugins 00:05:20.983 ************************************ 00:05:20.983 21:55:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.983 21:55:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:20.983 21:55:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.983 21:55:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.983 21:55:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.983 ************************************ 00:05:20.983 START TEST rpc_trace_cmd_test 
00:05:20.983 ************************************ 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:20.983 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57127", 00:05:20.983 "tpoint_group_mask": "0x8", 00:05:20.983 "iscsi_conn": { 00:05:20.983 "mask": "0x2", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "scsi": { 00:05:20.983 "mask": "0x4", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "bdev": { 00:05:20.983 "mask": "0x8", 00:05:20.983 "tpoint_mask": "0xffffffffffffffff" 00:05:20.983 }, 00:05:20.983 "nvmf_rdma": { 00:05:20.983 "mask": "0x10", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "nvmf_tcp": { 00:05:20.983 "mask": "0x20", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "ftl": { 00:05:20.983 "mask": "0x40", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "blobfs": { 00:05:20.983 "mask": "0x80", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "dsa": { 00:05:20.983 "mask": "0x200", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "thread": { 00:05:20.983 "mask": "0x400", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "nvme_pcie": { 00:05:20.983 "mask": "0x800", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "iaa": { 00:05:20.983 "mask": "0x1000", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "nvme_tcp": { 00:05:20.983 "mask": "0x2000", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "bdev_nvme": { 00:05:20.983 "mask": "0x4000", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "sock": { 00:05:20.983 "mask": "0x8000", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "blob": { 00:05:20.983 "mask": "0x10000", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "bdev_raid": { 00:05:20.983 "mask": "0x20000", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 }, 00:05:20.983 "scheduler": { 00:05:20.983 "mask": "0x40000", 00:05:20.983 "tpoint_mask": "0x0" 00:05:20.983 } 00:05:20.983 }' 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:20.983 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:21.245 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:21.245 21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:21.245 ************************************ 00:05:21.245 END TEST rpc_trace_cmd_test 00:05:21.245 ************************************ 00:05:21.245 
21:55:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:21.245 00:05:21.245 real 0m0.169s 00:05:21.245 user 0m0.132s 00:05:21.245 sys 0m0.026s 00:05:21.245 21:55:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.245 21:55:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:21.245 21:55:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:21.245 21:55:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:21.245 21:55:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:21.245 21:55:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.245 21:55:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.245 21:55:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.245 ************************************ 00:05:21.245 START TEST rpc_daemon_integrity 00:05:21.245 ************************************ 00:05:21.245 21:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:21.245 21:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.245 21:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.245 21:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.245 21:55:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.245 21:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.245 21:55:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.246 { 00:05:21.246 "name": "Malloc2", 00:05:21.246 "aliases": [ 00:05:21.246 "355a38df-c167-48d8-a0e1-b9639b4312cb" 00:05:21.246 ], 00:05:21.246 "product_name": "Malloc disk", 00:05:21.246 "block_size": 512, 00:05:21.246 "num_blocks": 16384, 00:05:21.246 "uuid": "355a38df-c167-48d8-a0e1-b9639b4312cb", 00:05:21.246 "assigned_rate_limits": { 00:05:21.246 "rw_ios_per_sec": 0, 00:05:21.246 "rw_mbytes_per_sec": 0, 00:05:21.246 "r_mbytes_per_sec": 0, 00:05:21.246 "w_mbytes_per_sec": 0 00:05:21.246 }, 00:05:21.246 "claimed": false, 00:05:21.246 "zoned": false, 00:05:21.246 "supported_io_types": { 00:05:21.246 "read": true, 00:05:21.246 "write": true, 00:05:21.246 "unmap": true, 00:05:21.246 "flush": true, 00:05:21.246 "reset": true, 00:05:21.246 "nvme_admin": false, 00:05:21.246 "nvme_io": false, 00:05:21.246 "nvme_io_md": false, 00:05:21.246 "write_zeroes": true, 00:05:21.246 "zcopy": true, 00:05:21.246 
"get_zone_info": false, 00:05:21.246 "zone_management": false, 00:05:21.246 "zone_append": false, 00:05:21.246 "compare": false, 00:05:21.246 "compare_and_write": false, 00:05:21.246 "abort": true, 00:05:21.246 "seek_hole": false, 00:05:21.246 "seek_data": false, 00:05:21.246 "copy": true, 00:05:21.246 "nvme_iov_md": false 00:05:21.246 }, 00:05:21.246 "memory_domains": [ 00:05:21.246 { 00:05:21.246 "dma_device_id": "system", 00:05:21.246 "dma_device_type": 1 00:05:21.246 }, 00:05:21.246 { 00:05:21.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.246 "dma_device_type": 2 00:05:21.246 } 00:05:21.246 ], 00:05:21.246 "driver_specific": {} 00:05:21.246 } 00:05:21.246 ]' 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.246 [2024-12-06 21:55:54.082517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:21.246 [2024-12-06 21:55:54.082592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.246 [2024-12-06 21:55:54.082614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:21.246 [2024-12-06 21:55:54.082626] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.246 [2024-12-06 21:55:54.085117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.246 [2024-12-06 21:55:54.085190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.246 Passthru0 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.246 { 00:05:21.246 "name": "Malloc2", 00:05:21.246 "aliases": [ 00:05:21.246 "355a38df-c167-48d8-a0e1-b9639b4312cb" 00:05:21.246 ], 00:05:21.246 "product_name": "Malloc disk", 00:05:21.246 "block_size": 512, 00:05:21.246 "num_blocks": 16384, 00:05:21.246 "uuid": "355a38df-c167-48d8-a0e1-b9639b4312cb", 00:05:21.246 "assigned_rate_limits": { 00:05:21.246 "rw_ios_per_sec": 0, 00:05:21.246 "rw_mbytes_per_sec": 0, 00:05:21.246 "r_mbytes_per_sec": 0, 00:05:21.246 "w_mbytes_per_sec": 0 00:05:21.246 }, 00:05:21.246 "claimed": true, 00:05:21.246 "claim_type": "exclusive_write", 00:05:21.246 "zoned": false, 00:05:21.246 "supported_io_types": { 00:05:21.246 "read": true, 00:05:21.246 "write": true, 00:05:21.246 "unmap": true, 00:05:21.246 "flush": true, 00:05:21.246 "reset": true, 00:05:21.246 "nvme_admin": false, 00:05:21.246 "nvme_io": false, 00:05:21.246 "nvme_io_md": false, 00:05:21.246 "write_zeroes": true, 00:05:21.246 "zcopy": true, 00:05:21.246 "get_zone_info": false, 00:05:21.246 "zone_management": false, 00:05:21.246 "zone_append": false, 00:05:21.246 "compare": 
false, 00:05:21.246 "compare_and_write": false, 00:05:21.246 "abort": true, 00:05:21.246 "seek_hole": false, 00:05:21.246 "seek_data": false, 00:05:21.246 "copy": true, 00:05:21.246 "nvme_iov_md": false 00:05:21.246 }, 00:05:21.246 "memory_domains": [ 00:05:21.246 { 00:05:21.246 "dma_device_id": "system", 00:05:21.246 "dma_device_type": 1 00:05:21.246 }, 00:05:21.246 { 00:05:21.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.246 "dma_device_type": 2 00:05:21.246 } 00:05:21.246 ], 00:05:21.246 "driver_specific": {} 00:05:21.246 }, 00:05:21.246 { 00:05:21.246 "name": "Passthru0", 00:05:21.246 "aliases": [ 00:05:21.246 "b8f0fcc7-544f-55e9-9841-e801e66affe4" 00:05:21.246 ], 00:05:21.246 "product_name": "passthru", 00:05:21.246 "block_size": 512, 00:05:21.246 "num_blocks": 16384, 00:05:21.246 "uuid": "b8f0fcc7-544f-55e9-9841-e801e66affe4", 00:05:21.246 "assigned_rate_limits": { 00:05:21.246 "rw_ios_per_sec": 0, 00:05:21.246 "rw_mbytes_per_sec": 0, 00:05:21.246 "r_mbytes_per_sec": 0, 00:05:21.246 "w_mbytes_per_sec": 0 00:05:21.246 }, 00:05:21.246 "claimed": false, 00:05:21.246 "zoned": false, 00:05:21.246 "supported_io_types": { 00:05:21.246 "read": true, 00:05:21.246 "write": true, 00:05:21.246 "unmap": true, 00:05:21.246 "flush": true, 00:05:21.246 "reset": true, 00:05:21.246 "nvme_admin": false, 00:05:21.246 "nvme_io": false, 00:05:21.246 "nvme_io_md": false, 00:05:21.246 "write_zeroes": true, 00:05:21.246 "zcopy": true, 00:05:21.246 "get_zone_info": false, 00:05:21.246 "zone_management": false, 00:05:21.246 "zone_append": false, 00:05:21.246 "compare": false, 00:05:21.246 "compare_and_write": false, 00:05:21.246 "abort": true, 00:05:21.246 "seek_hole": false, 00:05:21.246 "seek_data": false, 00:05:21.246 "copy": true, 00:05:21.246 "nvme_iov_md": false 00:05:21.246 }, 00:05:21.246 "memory_domains": [ 00:05:21.246 { 00:05:21.246 "dma_device_id": "system", 00:05:21.246 "dma_device_type": 1 00:05:21.246 }, 00:05:21.246 { 00:05:21.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.246 "dma_device_type": 2 00:05:21.246 } 00:05:21.246 ], 00:05:21.246 "driver_specific": { 00:05:21.246 "passthru": { 00:05:21.246 "name": "Passthru0", 00:05:21.246 "base_bdev_name": "Malloc2" 00:05:21.246 } 00:05:21.246 } 00:05:21.246 } 00:05:21.246 ]' 00:05:21.246 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:21.506 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.506 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.506 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.506 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.506 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.506 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.507 21:55:54 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.507 00:05:21.507 real 0m0.248s 00:05:21.507 user 0m0.131s 00:05:21.507 sys 0m0.031s 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.507 ************************************ 00:05:21.507 END TEST rpc_daemon_integrity 00:05:21.507 ************************************ 00:05:21.507 21:55:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.507 21:55:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:21.507 21:55:54 rpc -- rpc/rpc.sh@84 -- # killprocess 57127 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@954 -- # '[' -z 57127 ']' 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@958 -- # kill -0 57127 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57127 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.507 killing process with pid 57127 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57127' 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@973 -- # kill 57127 00:05:21.507 21:55:54 rpc -- common/autotest_common.sh@978 -- # wait 57127 00:05:23.417 00:05:23.417 real 0m3.892s 00:05:23.417 user 0m4.193s 00:05:23.417 sys 0m0.753s 00:05:23.417 21:55:55 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.417 ************************************ 00:05:23.417 21:55:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.417 END TEST rpc 00:05:23.417 ************************************ 00:05:23.417 21:55:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:23.417 21:55:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.417 21:55:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.417 21:55:55 -- common/autotest_common.sh@10 -- # set +x 00:05:23.417 ************************************ 00:05:23.417 START TEST skip_rpc 00:05:23.417 ************************************ 00:05:23.417 21:55:55 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:23.417 * Looking for test storage... 
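The rpc_daemon_integrity run above reduces to a short RPC conversation: layer a passthru vbdev over the Malloc2 bdev, confirm bdev_get_bdevs now reports two bdevs, then delete the vbdev and its base and confirm the list is empty. A minimal out-of-harness reproduction against a running spdk_tgt could look like the sketch below; the 8 MiB malloc size matches the 16384 x 512 B geometry in the dump above, while the default socket path is an assumption rather than something taken from the test.

    rpc=scripts/rpc.py    # talks to /var/tmp/spdk.sock by default

    $rpc bdev_malloc_create -b Malloc2 8 512            # 16384 blocks of 512 B
    $rpc bdev_passthru_create -b Malloc2 -p Passthru0   # claims Malloc2 exclusively

    [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]      # Malloc2 + Passthru0

    $rpc bdev_passthru_delete Passthru0                 # vbdev first, then its base
    $rpc bdev_malloc_delete Malloc2
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]

Deleting in that order matters: Malloc2 stays claimed ("claimed": true, "claim_type": "exclusive_write" in the dump) for as long as Passthru0 exists.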
00:05:23.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.417 21:55:55 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.417 21:55:55 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.417 21:55:55 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.417 21:55:56 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.417 21:55:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:23.417 21:55:56 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.417 21:55:56 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.417 --rc genhtml_branch_coverage=1 00:05:23.417 --rc genhtml_function_coverage=1 00:05:23.417 --rc genhtml_legend=1 00:05:23.417 --rc geninfo_all_blocks=1 00:05:23.417 --rc geninfo_unexecuted_blocks=1 00:05:23.417 00:05:23.417 ' 00:05:23.417 21:55:56 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.417 --rc genhtml_branch_coverage=1 00:05:23.417 --rc genhtml_function_coverage=1 00:05:23.417 --rc genhtml_legend=1 00:05:23.417 --rc geninfo_all_blocks=1 00:05:23.417 --rc geninfo_unexecuted_blocks=1 00:05:23.417 00:05:23.417 ' 00:05:23.417 21:55:56 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:23.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.417 --rc genhtml_branch_coverage=1 00:05:23.417 --rc genhtml_function_coverage=1 00:05:23.417 --rc genhtml_legend=1 00:05:23.417 --rc geninfo_all_blocks=1 00:05:23.417 --rc geninfo_unexecuted_blocks=1 00:05:23.417 00:05:23.417 ' 00:05:23.417 21:55:56 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.417 --rc genhtml_branch_coverage=1 00:05:23.417 --rc genhtml_function_coverage=1 00:05:23.417 --rc genhtml_legend=1 00:05:23.417 --rc geninfo_all_blocks=1 00:05:23.417 --rc geninfo_unexecuted_blocks=1 00:05:23.417 00:05:23.417 ' 00:05:23.417 21:55:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:23.417 21:55:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:23.417 21:55:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:23.417 21:55:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.417 21:55:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.417 21:55:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.417 ************************************ 00:05:23.417 START TEST skip_rpc 00:05:23.417 ************************************ 00:05:23.417 21:55:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:23.417 21:55:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57345 00:05:23.417 21:55:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.417 21:55:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:23.417 21:55:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:23.417 [2024-12-06 21:55:56.113554] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
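At this point the target has been launched with --no-rpc-server, so no listener will ever appear on /var/tmp/spdk.sock; after the fixed 5-second sleep the test asserts that an ordinary RPC fails. Stripped of the xtrace and NOT/valid_exec_arg plumbing, the check amounts to roughly this sketch (paths relative to the SPDK repo root):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5    # fixed delay; waitforlisten cannot be used without an RPC socket

    # any RPC must fail -- there is no server to answer spdk_get_version
    if scripts/rpc.py spdk_get_version; then
        echo "RPC unexpectedly succeeded" >&2
        exit 1
    fi

    kill "$spdk_pid" && wait "$spdk_pid"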
00:05:23.417 [2024-12-06 21:55:56.113968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57345 ] 00:05:23.417 [2024-12-06 21:55:56.274626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.676 [2024-12-06 21:55:56.370227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57345 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57345 ']' 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57345 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57345 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.962 killing process with pid 57345 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57345' 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57345 00:05:28.962 21:56:01 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57345 00:05:29.532 00:05:29.532 real 0m6.220s 00:05:29.532 user 0m5.865s 00:05:29.532 sys 0m0.257s 00:05:29.532 21:56:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.532 ************************************ 00:05:29.532 END TEST skip_rpc 00:05:29.532 ************************************ 00:05:29.532 21:56:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:29.532 21:56:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:29.532 21:56:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.532 21:56:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.532 21:56:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.532 ************************************ 00:05:29.532 START TEST skip_rpc_with_json 00:05:29.532 ************************************ 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57438 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57438 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57438 ']' 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.532 21:56:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.532 [2024-12-06 21:56:02.381020] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
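skip_rpc_with_json, which is starting here, performs a config round trip: with the first target live it creates a TCP transport, serializes the running state with save_config into config.json, then relaunches spdk_tgt with --json pointing at that file and greps the new target's log for 'TCP Transport Init' to prove the transport was rebuilt from JSON alone, with no RPC server involved. Condensed, the flow looks something like this sketch (paths abbreviated from the ones in the log):

    rpc=scripts/rpc.py

    $rpc nvmf_create_transport -t tcp              # state worth round-tripping
    $rpc save_config > test/rpc/config.json        # full subsystem config as JSON
    kill "$spdk_pid" && wait "$spdk_pid"

    # the second target replays the config at startup
    build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt  # transport recreated from file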
00:05:29.532 [2024-12-06 21:56:02.381139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57438 ] 00:05:29.794 [2024-12-06 21:56:02.535410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.794 [2024-12-06 21:56:02.617784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.366 [2024-12-06 21:56:03.211153] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:30.366 request: 00:05:30.366 { 00:05:30.366 "trtype": "tcp", 00:05:30.366 "method": "nvmf_get_transports", 00:05:30.366 "req_id": 1 00:05:30.366 } 00:05:30.366 Got JSON-RPC error response 00:05:30.366 response: 00:05:30.366 { 00:05:30.366 "code": -19, 00:05:30.366 "message": "No such device" 00:05:30.366 } 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.366 [2024-12-06 21:56:03.223284] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:30.366 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.367 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.628 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.628 21:56:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:30.628 { 00:05:30.628 "subsystems": [ 00:05:30.628 { 00:05:30.628 "subsystem": "fsdev", 00:05:30.628 "config": [ 00:05:30.628 { 00:05:30.628 "method": "fsdev_set_opts", 00:05:30.628 "params": { 00:05:30.628 "fsdev_io_pool_size": 65535, 00:05:30.628 "fsdev_io_cache_size": 256 00:05:30.628 } 00:05:30.628 } 00:05:30.629 ] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "keyring", 00:05:30.629 "config": [] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "iobuf", 00:05:30.629 "config": [ 00:05:30.629 { 00:05:30.629 "method": "iobuf_set_options", 00:05:30.629 "params": { 00:05:30.629 "small_pool_count": 8192, 00:05:30.629 "large_pool_count": 1024, 00:05:30.629 "small_bufsize": 8192, 00:05:30.629 "large_bufsize": 135168, 00:05:30.629 "enable_numa": false 00:05:30.629 } 00:05:30.629 } 00:05:30.629 ] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "sock", 00:05:30.629 "config": [ 00:05:30.629 { 
00:05:30.629 "method": "sock_set_default_impl", 00:05:30.629 "params": { 00:05:30.629 "impl_name": "posix" 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "sock_impl_set_options", 00:05:30.629 "params": { 00:05:30.629 "impl_name": "ssl", 00:05:30.629 "recv_buf_size": 4096, 00:05:30.629 "send_buf_size": 4096, 00:05:30.629 "enable_recv_pipe": true, 00:05:30.629 "enable_quickack": false, 00:05:30.629 "enable_placement_id": 0, 00:05:30.629 "enable_zerocopy_send_server": true, 00:05:30.629 "enable_zerocopy_send_client": false, 00:05:30.629 "zerocopy_threshold": 0, 00:05:30.629 "tls_version": 0, 00:05:30.629 "enable_ktls": false 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "sock_impl_set_options", 00:05:30.629 "params": { 00:05:30.629 "impl_name": "posix", 00:05:30.629 "recv_buf_size": 2097152, 00:05:30.629 "send_buf_size": 2097152, 00:05:30.629 "enable_recv_pipe": true, 00:05:30.629 "enable_quickack": false, 00:05:30.629 "enable_placement_id": 0, 00:05:30.629 "enable_zerocopy_send_server": true, 00:05:30.629 "enable_zerocopy_send_client": false, 00:05:30.629 "zerocopy_threshold": 0, 00:05:30.629 "tls_version": 0, 00:05:30.629 "enable_ktls": false 00:05:30.629 } 00:05:30.629 } 00:05:30.629 ] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "vmd", 00:05:30.629 "config": [] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "accel", 00:05:30.629 "config": [ 00:05:30.629 { 00:05:30.629 "method": "accel_set_options", 00:05:30.629 "params": { 00:05:30.629 "small_cache_size": 128, 00:05:30.629 "large_cache_size": 16, 00:05:30.629 "task_count": 2048, 00:05:30.629 "sequence_count": 2048, 00:05:30.629 "buf_count": 2048 00:05:30.629 } 00:05:30.629 } 00:05:30.629 ] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "bdev", 00:05:30.629 "config": [ 00:05:30.629 { 00:05:30.629 "method": "bdev_set_options", 00:05:30.629 "params": { 00:05:30.629 "bdev_io_pool_size": 65535, 00:05:30.629 "bdev_io_cache_size": 256, 00:05:30.629 "bdev_auto_examine": true, 00:05:30.629 "iobuf_small_cache_size": 128, 00:05:30.629 "iobuf_large_cache_size": 16 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "bdev_raid_set_options", 00:05:30.629 "params": { 00:05:30.629 "process_window_size_kb": 1024, 00:05:30.629 "process_max_bandwidth_mb_sec": 0 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "bdev_iscsi_set_options", 00:05:30.629 "params": { 00:05:30.629 "timeout_sec": 30 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "bdev_nvme_set_options", 00:05:30.629 "params": { 00:05:30.629 "action_on_timeout": "none", 00:05:30.629 "timeout_us": 0, 00:05:30.629 "timeout_admin_us": 0, 00:05:30.629 "keep_alive_timeout_ms": 10000, 00:05:30.629 "arbitration_burst": 0, 00:05:30.629 "low_priority_weight": 0, 00:05:30.629 "medium_priority_weight": 0, 00:05:30.629 "high_priority_weight": 0, 00:05:30.629 "nvme_adminq_poll_period_us": 10000, 00:05:30.629 "nvme_ioq_poll_period_us": 0, 00:05:30.629 "io_queue_requests": 0, 00:05:30.629 "delay_cmd_submit": true, 00:05:30.629 "transport_retry_count": 4, 00:05:30.629 "bdev_retry_count": 3, 00:05:30.629 "transport_ack_timeout": 0, 00:05:30.629 "ctrlr_loss_timeout_sec": 0, 00:05:30.629 "reconnect_delay_sec": 0, 00:05:30.629 "fast_io_fail_timeout_sec": 0, 00:05:30.629 "disable_auto_failback": false, 00:05:30.629 "generate_uuids": false, 00:05:30.629 "transport_tos": 0, 00:05:30.629 "nvme_error_stat": false, 00:05:30.629 "rdma_srq_size": 0, 00:05:30.629 "io_path_stat": false, 
00:05:30.629 "allow_accel_sequence": false, 00:05:30.629 "rdma_max_cq_size": 0, 00:05:30.629 "rdma_cm_event_timeout_ms": 0, 00:05:30.629 "dhchap_digests": [ 00:05:30.629 "sha256", 00:05:30.629 "sha384", 00:05:30.629 "sha512" 00:05:30.629 ], 00:05:30.629 "dhchap_dhgroups": [ 00:05:30.629 "null", 00:05:30.629 "ffdhe2048", 00:05:30.629 "ffdhe3072", 00:05:30.629 "ffdhe4096", 00:05:30.629 "ffdhe6144", 00:05:30.629 "ffdhe8192" 00:05:30.629 ] 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "bdev_nvme_set_hotplug", 00:05:30.629 "params": { 00:05:30.629 "period_us": 100000, 00:05:30.629 "enable": false 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "bdev_wait_for_examine" 00:05:30.629 } 00:05:30.629 ] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "scsi", 00:05:30.629 "config": null 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "scheduler", 00:05:30.629 "config": [ 00:05:30.629 { 00:05:30.629 "method": "framework_set_scheduler", 00:05:30.629 "params": { 00:05:30.629 "name": "static" 00:05:30.629 } 00:05:30.629 } 00:05:30.629 ] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "vhost_scsi", 00:05:30.629 "config": [] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "vhost_blk", 00:05:30.629 "config": [] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "ublk", 00:05:30.629 "config": [] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "nbd", 00:05:30.629 "config": [] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "nvmf", 00:05:30.629 "config": [ 00:05:30.629 { 00:05:30.629 "method": "nvmf_set_config", 00:05:30.629 "params": { 00:05:30.629 "discovery_filter": "match_any", 00:05:30.629 "admin_cmd_passthru": { 00:05:30.629 "identify_ctrlr": false 00:05:30.629 }, 00:05:30.629 "dhchap_digests": [ 00:05:30.629 "sha256", 00:05:30.629 "sha384", 00:05:30.629 "sha512" 00:05:30.629 ], 00:05:30.629 "dhchap_dhgroups": [ 00:05:30.629 "null", 00:05:30.629 "ffdhe2048", 00:05:30.629 "ffdhe3072", 00:05:30.629 "ffdhe4096", 00:05:30.629 "ffdhe6144", 00:05:30.629 "ffdhe8192" 00:05:30.629 ] 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "nvmf_set_max_subsystems", 00:05:30.629 "params": { 00:05:30.629 "max_subsystems": 1024 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "nvmf_set_crdt", 00:05:30.629 "params": { 00:05:30.629 "crdt1": 0, 00:05:30.629 "crdt2": 0, 00:05:30.629 "crdt3": 0 00:05:30.629 } 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "method": "nvmf_create_transport", 00:05:30.629 "params": { 00:05:30.629 "trtype": "TCP", 00:05:30.629 "max_queue_depth": 128, 00:05:30.629 "max_io_qpairs_per_ctrlr": 127, 00:05:30.629 "in_capsule_data_size": 4096, 00:05:30.629 "max_io_size": 131072, 00:05:30.629 "io_unit_size": 131072, 00:05:30.629 "max_aq_depth": 128, 00:05:30.629 "num_shared_buffers": 511, 00:05:30.629 "buf_cache_size": 4294967295, 00:05:30.629 "dif_insert_or_strip": false, 00:05:30.629 "zcopy": false, 00:05:30.629 "c2h_success": true, 00:05:30.629 "sock_priority": 0, 00:05:30.629 "abort_timeout_sec": 1, 00:05:30.629 "ack_timeout": 0, 00:05:30.629 "data_wr_pool_size": 0 00:05:30.629 } 00:05:30.629 } 00:05:30.629 ] 00:05:30.629 }, 00:05:30.629 { 00:05:30.629 "subsystem": "iscsi", 00:05:30.629 "config": [ 00:05:30.629 { 00:05:30.629 "method": "iscsi_set_options", 00:05:30.629 "params": { 00:05:30.629 "node_base": "iqn.2016-06.io.spdk", 00:05:30.629 "max_sessions": 128, 00:05:30.629 "max_connections_per_session": 2, 00:05:30.629 "max_queue_depth": 64, 00:05:30.629 
"default_time2wait": 2, 00:05:30.629 "default_time2retain": 20, 00:05:30.629 "first_burst_length": 8192, 00:05:30.629 "immediate_data": true, 00:05:30.629 "allow_duplicated_isid": false, 00:05:30.629 "error_recovery_level": 0, 00:05:30.629 "nop_timeout": 60, 00:05:30.629 "nop_in_interval": 30, 00:05:30.629 "disable_chap": false, 00:05:30.629 "require_chap": false, 00:05:30.629 "mutual_chap": false, 00:05:30.629 "chap_group": 0, 00:05:30.629 "max_large_datain_per_connection": 64, 00:05:30.629 "max_r2t_per_connection": 4, 00:05:30.629 "pdu_pool_size": 36864, 00:05:30.630 "immediate_data_pool_size": 16384, 00:05:30.630 "data_out_pool_size": 2048 00:05:30.630 } 00:05:30.630 } 00:05:30.630 ] 00:05:30.630 } 00:05:30.630 ] 00:05:30.630 } 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57438 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57438 ']' 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57438 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57438 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.630 killing process with pid 57438 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57438' 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57438 00:05:30.630 21:56:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57438 00:05:32.014 21:56:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57472 00:05:32.014 21:56:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:32.014 21:56:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57472 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57472 ']' 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57472 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57472 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.298 killing process with pid 57472 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57472' 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57472 00:05:37.298 21:56:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57472 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:38.240 00:05:38.240 real 0m8.487s 00:05:38.240 user 0m8.122s 00:05:38.240 sys 0m0.591s 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.240 ************************************ 00:05:38.240 END TEST skip_rpc_with_json 00:05:38.240 ************************************ 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.240 21:56:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:38.240 21:56:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.240 21:56:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.240 21:56:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.240 ************************************ 00:05:38.240 START TEST skip_rpc_with_delay 00:05:38.240 ************************************ 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.240 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.241 [2024-12-06 21:56:10.925121] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
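The *ERROR* line just above is the point of skip_rpc_with_delay: --wait-for-rpc asks the app to pause initialization until a framework_start_init RPC arrives, which can never happen once --no-rpc-server disables the RPC server, so spdk_app_start rejects the combination outright. The test therefore only needs to assert a non-zero exit, roughly:

    # contradictory flags: wait for an RPC that can never be delivered
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt should have refused --wait-for-rpc" >&2
        exit 1
    fi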
00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.241 00:05:38.241 real 0m0.138s 00:05:38.241 user 0m0.078s 00:05:38.241 sys 0m0.059s 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.241 21:56:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:38.241 ************************************ 00:05:38.241 END TEST skip_rpc_with_delay 00:05:38.241 ************************************ 00:05:38.241 21:56:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:38.241 21:56:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:38.241 21:56:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:38.241 21:56:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.241 21:56:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.241 21:56:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.241 ************************************ 00:05:38.241 START TEST exit_on_failed_rpc_init 00:05:38.241 ************************************ 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57594 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57594 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57594 ']' 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.241 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.241 [2024-12-06 21:56:11.096778] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:05:38.241 [2024-12-06 21:56:11.096900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57594 ] 00:05:38.501 [2024-12-06 21:56:11.250126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.502 [2024-12-06 21:56:11.331089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.070 21:56:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.331 [2024-12-06 21:56:11.955231] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:05:39.331 [2024-12-06 21:56:11.955475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57611 ] 00:05:39.331 [2024-12-06 21:56:12.115600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.592 [2024-12-06 21:56:12.215226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.592 [2024-12-06 21:56:12.215476] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
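This failure is deliberate: both spdk_tgt instances were started without an explicit RPC socket, so both try to listen on /var/tmp/spdk.sock, the second one's listen fails, and the app exits non-zero as the test expects. If two targets genuinely need to coexist, each gets its own socket via -r, as in this illustrative sketch (the socket names are made up):

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &

    # each instance is then addressed explicitly
    scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
    scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version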
00:05:39.592 [2024-12-06 21:56:12.215745] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:39.592 [2024-12-06 21:56:12.215887] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57594 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57594 ']' 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57594 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57594 00:05:39.592 killing process with pid 57594 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57594' 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57594 00:05:39.592 21:56:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57594 00:05:40.973 00:05:40.973 real 0m2.564s 00:05:40.973 user 0m2.839s 00:05:40.973 sys 0m0.385s 00:05:40.973 ************************************ 00:05:40.973 END TEST exit_on_failed_rpc_init 00:05:40.973 ************************************ 00:05:40.973 21:56:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.973 21:56:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.973 21:56:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.973 00:05:40.973 real 0m17.726s 00:05:40.973 user 0m17.034s 00:05:40.973 sys 0m1.472s 00:05:40.973 ************************************ 00:05:40.973 END TEST skip_rpc 00:05:40.973 ************************************ 00:05:40.973 21:56:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.973 21:56:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.973 21:56:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.973 21:56:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.973 21:56:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.973 21:56:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.973 
************************************ 00:05:40.973 START TEST rpc_client 00:05:40.973 ************************************ 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.973 * Looking for test storage... 00:05:40.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.973 21:56:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.973 --rc genhtml_branch_coverage=1 00:05:40.973 --rc genhtml_function_coverage=1 00:05:40.973 --rc genhtml_legend=1 00:05:40.973 --rc geninfo_all_blocks=1 00:05:40.973 --rc geninfo_unexecuted_blocks=1 00:05:40.973 00:05:40.973 ' 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.973 --rc genhtml_branch_coverage=1 00:05:40.973 --rc genhtml_function_coverage=1 00:05:40.973 --rc genhtml_legend=1 00:05:40.973 --rc geninfo_all_blocks=1 00:05:40.973 --rc geninfo_unexecuted_blocks=1 00:05:40.973 00:05:40.973 ' 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.973 --rc genhtml_branch_coverage=1 00:05:40.973 --rc genhtml_function_coverage=1 00:05:40.973 --rc genhtml_legend=1 00:05:40.973 --rc geninfo_all_blocks=1 00:05:40.973 --rc geninfo_unexecuted_blocks=1 00:05:40.973 00:05:40.973 ' 00:05:40.973 21:56:13 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.973 --rc genhtml_branch_coverage=1 00:05:40.973 --rc genhtml_function_coverage=1 00:05:40.973 --rc genhtml_legend=1 00:05:40.973 --rc geninfo_all_blocks=1 00:05:40.973 --rc geninfo_unexecuted_blocks=1 00:05:40.973 00:05:40.973 ' 00:05:40.973 21:56:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:41.234 OK 00:05:41.234 21:56:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.234 00:05:41.234 real 0m0.206s 00:05:41.234 user 0m0.124s 00:05:41.234 sys 0m0.091s 00:05:41.234 21:56:13 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.234 21:56:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:41.234 ************************************ 00:05:41.234 END TEST rpc_client 00:05:41.234 ************************************ 00:05:41.234 21:56:13 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.234 21:56:13 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.234 21:56:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.234 21:56:13 -- common/autotest_common.sh@10 -- # set +x 00:05:41.234 ************************************ 00:05:41.234 START TEST json_config 00:05:41.234 ************************************ 00:05:41.234 21:56:13 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.234 21:56:13 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.234 21:56:13 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.234 21:56:13 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.234 21:56:14 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.234 21:56:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.234 21:56:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.234 21:56:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.234 21:56:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.234 21:56:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.234 21:56:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.234 21:56:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.234 21:56:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.234 21:56:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.234 21:56:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.234 21:56:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.234 21:56:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:41.234 21:56:14 json_config -- scripts/common.sh@345 -- # : 1 00:05:41.234 21:56:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.234 21:56:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.234 21:56:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:41.234 21:56:14 json_config -- scripts/common.sh@353 -- # local d=1 00:05:41.234 21:56:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.234 21:56:14 json_config -- scripts/common.sh@355 -- # echo 1 00:05:41.234 21:56:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.234 21:56:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:41.234 21:56:14 json_config -- scripts/common.sh@353 -- # local d=2 00:05:41.234 21:56:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.234 21:56:14 json_config -- scripts/common.sh@355 -- # echo 2 00:05:41.234 21:56:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.234 21:56:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.234 21:56:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.234 21:56:14 json_config -- scripts/common.sh@368 -- # return 0 00:05:41.234 21:56:14 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.234 21:56:14 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.234 --rc genhtml_branch_coverage=1 00:05:41.234 --rc genhtml_function_coverage=1 00:05:41.234 --rc genhtml_legend=1 00:05:41.234 --rc geninfo_all_blocks=1 00:05:41.234 --rc geninfo_unexecuted_blocks=1 00:05:41.234 00:05:41.234 ' 00:05:41.234 21:56:14 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.234 --rc genhtml_branch_coverage=1 00:05:41.234 --rc genhtml_function_coverage=1 00:05:41.234 --rc genhtml_legend=1 00:05:41.234 --rc geninfo_all_blocks=1 00:05:41.234 --rc geninfo_unexecuted_blocks=1 00:05:41.234 00:05:41.234 ' 00:05:41.234 21:56:14 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.234 --rc genhtml_branch_coverage=1 00:05:41.234 --rc genhtml_function_coverage=1 00:05:41.234 --rc genhtml_legend=1 00:05:41.234 --rc geninfo_all_blocks=1 00:05:41.234 --rc geninfo_unexecuted_blocks=1 00:05:41.234 00:05:41.234 ' 00:05:41.234 21:56:14 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.234 --rc genhtml_branch_coverage=1 00:05:41.234 --rc genhtml_function_coverage=1 00:05:41.234 --rc genhtml_legend=1 00:05:41.234 --rc geninfo_all_blocks=1 00:05:41.234 --rc geninfo_unexecuted_blocks=1 00:05:41.234 00:05:41.234 ' 00:05:41.234 21:56:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.234 21:56:14 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:decdbd46-e55c-4ed8-bfae-d89fc37da2e9 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=decdbd46-e55c-4ed8-bfae-d89fc37da2e9 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.234 21:56:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.234 21:56:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.234 21:56:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.234 21:56:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.234 21:56:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.234 21:56:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.234 21:56:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.234 21:56:14 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.234 21:56:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@51 -- # : 0 00:05:41.234 21:56:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.235 21:56:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.235 21:56:14 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.235 21:56:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.235 21:56:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.235 21:56:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.235 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.235 21:56:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.235 21:56:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.235 21:56:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.235 21:56:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:41.235 21:56:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.235 21:56:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.235 21:56:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.235 21:56:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.235 21:56:14 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:41.235 WARNING: No tests are enabled so not running JSON configuration tests 00:05:41.235 21:56:14 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:41.235 00:05:41.235 real 0m0.139s 00:05:41.235 user 0m0.091s 00:05:41.235 sys 0m0.048s 00:05:41.235 21:56:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.235 21:56:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.235 ************************************ 00:05:41.235 END TEST json_config 00:05:41.235 ************************************ 00:05:41.235 21:56:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.235 21:56:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.235 21:56:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.235 21:56:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.503 ************************************ 00:05:41.503 START TEST json_config_extra_key 00:05:41.503 ************************************ 00:05:41.503 21:56:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.503 21:56:14 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.503 21:56:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.503 21:56:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.503 21:56:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.503 21:56:14 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.503 21:56:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:41.503 21:56:14 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.503 21:56:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.504 --rc genhtml_branch_coverage=1 00:05:41.504 --rc genhtml_function_coverage=1 00:05:41.504 --rc genhtml_legend=1 00:05:41.504 --rc geninfo_all_blocks=1 00:05:41.504 --rc geninfo_unexecuted_blocks=1 00:05:41.504 00:05:41.504 ' 00:05:41.504 21:56:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.504 --rc genhtml_branch_coverage=1 00:05:41.504 --rc genhtml_function_coverage=1 00:05:41.504 --rc genhtml_legend=1 00:05:41.504 --rc geninfo_all_blocks=1 00:05:41.504 --rc geninfo_unexecuted_blocks=1 00:05:41.504 00:05:41.504 ' 00:05:41.504 21:56:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.504 --rc genhtml_branch_coverage=1 00:05:41.504 --rc genhtml_function_coverage=1 00:05:41.504 --rc genhtml_legend=1 00:05:41.504 --rc geninfo_all_blocks=1 00:05:41.504 --rc geninfo_unexecuted_blocks=1 00:05:41.504 00:05:41.504 ' 00:05:41.504 21:56:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.504 --rc genhtml_branch_coverage=1 00:05:41.504 --rc 
genhtml_function_coverage=1 00:05:41.504 --rc genhtml_legend=1 00:05:41.504 --rc geninfo_all_blocks=1 00:05:41.504 --rc geninfo_unexecuted_blocks=1 00:05:41.504 00:05:41.504 ' 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:decdbd46-e55c-4ed8-bfae-d89fc37da2e9 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=decdbd46-e55c-4ed8-bfae-d89fc37da2e9 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.504 21:56:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.504 21:56:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.504 21:56:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.504 21:56:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.504 21:56:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.504 21:56:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.504 21:56:14 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.504 21:56:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.504 21:56:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.504 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.504 21:56:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.504 INFO: launching applications... 
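The common.sh trace above stages a single 'target' app: associative arrays map the app name to its pid, its RPC socket (/var/tmp/spdk_tgt.sock), its spdk_tgt parameters (-m 0x1 -s 1024), and the extra_key.json config. The launch helper invoked next boils down to the following sketch, reconstructed from the traced commands (the real json_config_test_start_app in test/json_config/common.sh adds error traps and extra parameters):

    # sketch: start spdk_tgt for one named app and remember its pid
    json_config_test_start_app() {
        local app=$1; shift
        # app_params is intentionally unquoted so '-m 0x1 -s 1024' word-splits
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" "$@" &
        app_pid[$app]=$!    # 57800 in this run
        echo "Waiting for $app to run..."
        waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
    }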
00:05:41.504 21:56:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57800 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.504 Waiting for target to run... 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57800 /var/tmp/spdk_tgt.sock 00:05:41.504 21:56:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57800 ']' 00:05:41.504 21:56:14 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.504 21:56:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.504 21:56:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.504 21:56:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.504 21:56:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.504 21:56:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.504 [2024-12-06 21:56:14.317224] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:05:41.504 [2024-12-06 21:56:14.317768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57800 ] 00:05:42.070 [2024-12-06 21:56:14.638596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.070 [2024-12-06 21:56:14.729406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.640 21:56:15 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.640 21:56:15 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:42.640 21:56:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.640 00:05:42.640 INFO: shutting down applications... 00:05:42.640 21:56:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
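waitforlisten is the piece that turns 'Waiting for process to start up...' into the return 0 above: with max_retries=100 it alternates a liveness check on the pid with a probe of the RPC socket. A simplified sketch, assuming rpc.py as the probe (the real helper in common/autotest_common.sh also disables xtrace and reports which retry succeeded):

    # sketch: poll until $pid is listening on $rpc_addr, or fail
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                rpc_get_methods &> /dev/null; then
                return 0    # RPC socket answered
            fi
            sleep 0.5
        done
        return 1    # never came up
    }

Here the target came up within about a second: the EAL banner at 21:56:14 is followed by the (( i == 0 )) check and return 0 at 21:56:15.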
00:05:42.640 21:56:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.640 21:56:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.640 21:56:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.640 21:56:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57800 ]] 00:05:42.640 21:56:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57800 00:05:42.640 21:56:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.640 21:56:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.640 21:56:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57800 00:05:42.640 21:56:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.899 21:56:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.899 21:56:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.899 21:56:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57800 00:05:42.899 21:56:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.465 21:56:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.465 21:56:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.465 21:56:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57800 00:05:43.465 21:56:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.074 21:56:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.074 21:56:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.074 21:56:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57800 00:05:44.074 21:56:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.640 21:56:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.640 21:56:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.640 21:56:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57800 00:05:44.640 21:56:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.640 SPDK target shutdown done 00:05:44.640 Success 00:05:44.640 21:56:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:44.640 21:56:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.640 21:56:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.640 21:56:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:44.640 00:05:44.640 real 0m3.157s 00:05:44.640 user 0m2.627s 00:05:44.640 sys 0m0.370s 00:05:44.640 ************************************ 00:05:44.640 END TEST json_config_extra_key 00:05:44.640 ************************************ 00:05:44.640 21:56:17 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.640 21:56:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.640 21:56:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.640 21:56:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.640 21:56:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.640 21:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.640 
************************************ 00:05:44.640 START TEST alias_rpc 00:05:44.640 ************************************ 00:05:44.640 21:56:17 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.640 * Looking for test storage... 00:05:44.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:44.640 21:56:17 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.640 21:56:17 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.640 21:56:17 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.640 21:56:17 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.640 21:56:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.641 21:56:17 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.641 --rc genhtml_branch_coverage=1 00:05:44.641 --rc genhtml_function_coverage=1 00:05:44.641 --rc genhtml_legend=1 00:05:44.641 --rc geninfo_all_blocks=1 00:05:44.641 --rc geninfo_unexecuted_blocks=1 00:05:44.641 00:05:44.641 ' 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.641 --rc genhtml_branch_coverage=1 00:05:44.641 --rc genhtml_function_coverage=1 00:05:44.641 --rc genhtml_legend=1 00:05:44.641 --rc geninfo_all_blocks=1 00:05:44.641 --rc geninfo_unexecuted_blocks=1 00:05:44.641 00:05:44.641 ' 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.641 --rc genhtml_branch_coverage=1 00:05:44.641 --rc genhtml_function_coverage=1 00:05:44.641 --rc genhtml_legend=1 00:05:44.641 --rc geninfo_all_blocks=1 00:05:44.641 --rc geninfo_unexecuted_blocks=1 00:05:44.641 00:05:44.641 ' 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.641 --rc genhtml_branch_coverage=1 00:05:44.641 --rc genhtml_function_coverage=1 00:05:44.641 --rc genhtml_legend=1 00:05:44.641 --rc geninfo_all_blocks=1 00:05:44.641 --rc geninfo_unexecuted_blocks=1 00:05:44.641 00:05:44.641 ' 00:05:44.641 21:56:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.641 21:56:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57893 00:05:44.641 21:56:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57893 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57893 ']' 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
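The lt 1.15 2 walk traced before each of these tests is scripts/common.sh comparing the installed lcov version against 2 to pick coverage flags: cmp_versions splits both version strings on '.', '-' and ':' (hence the IFS=.-: lines) and compares field by field. A condensed reconstruction of that logic (the traced original also counts fields into ver1_l/ver2_l and validates each one through decimal()):

    # condensed sketch of the scripts/common.sh version comparison traced above
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2 v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]    # all fields equal: only <=, >= or == succeed
    }

For lt 1.15 2 the first fields already decide it (1 < 2), so the function returns 0 and the LCOV_OPTS/LCOV exports fire, exactly as logged.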
00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.641 21:56:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.641 21:56:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.641 [2024-12-06 21:56:17.509843] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:05:44.641 [2024-12-06 21:56:17.510091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57893 ] 00:05:44.902 [2024-12-06 21:56:17.659640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.902 [2024-12-06 21:56:17.741908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.837 21:56:18 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.837 21:56:18 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.837 21:56:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:45.837 21:56:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57893 00:05:45.837 21:56:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57893 ']' 00:05:45.837 21:56:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57893 00:05:45.837 21:56:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:45.837 21:56:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.838 21:56:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57893 00:05:45.838 killing process with pid 57893 00:05:45.838 21:56:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.838 21:56:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.838 21:56:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57893' 00:05:45.838 21:56:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 57893 00:05:45.838 21:56:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 57893 00:05:47.221 ************************************ 00:05:47.221 END TEST alias_rpc 00:05:47.221 ************************************ 00:05:47.221 00:05:47.221 real 0m2.479s 00:05:47.221 user 0m2.621s 00:05:47.221 sys 0m0.364s 00:05:47.221 21:56:19 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.221 21:56:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.221 21:56:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:47.221 21:56:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:47.221 21:56:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.221 21:56:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.221 21:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:47.221 ************************************ 00:05:47.221 START TEST spdkcli_tcp 00:05:47.221 ************************************ 00:05:47.221 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:47.221 * Looking for test storage... 
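Teardown in these tests is the killprocess helper whose trace closes the alias_rpc run above: check the pid, make sure it is an SPDK reactor rather than a sudo wrapper, then signal and reap it. A condensed sketch of the traced logic from common/autotest_common.sh (the sudo branch, which the real helper handles differently, is elided):

    # sketch: stop a target started by the test and wait for it to exit
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0    # already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")    # reactor_0 here
            [[ $process_name == sudo ]] && return 1    # elided special case
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child so the test sees its exit status
    }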
00:05:47.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:47.221 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.221 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.221 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.221 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:47.221 21:56:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.222 21:56:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.222 --rc genhtml_branch_coverage=1 00:05:47.222 --rc genhtml_function_coverage=1 00:05:47.222 --rc genhtml_legend=1 00:05:47.222 --rc geninfo_all_blocks=1 00:05:47.222 --rc geninfo_unexecuted_blocks=1 00:05:47.222 00:05:47.222 ' 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.222 --rc genhtml_branch_coverage=1 00:05:47.222 --rc genhtml_function_coverage=1 00:05:47.222 --rc genhtml_legend=1 00:05:47.222 --rc geninfo_all_blocks=1 00:05:47.222 --rc geninfo_unexecuted_blocks=1 00:05:47.222 
00:05:47.222 ' 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.222 --rc genhtml_branch_coverage=1 00:05:47.222 --rc genhtml_function_coverage=1 00:05:47.222 --rc genhtml_legend=1 00:05:47.222 --rc geninfo_all_blocks=1 00:05:47.222 --rc geninfo_unexecuted_blocks=1 00:05:47.222 00:05:47.222 ' 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.222 --rc genhtml_branch_coverage=1 00:05:47.222 --rc genhtml_function_coverage=1 00:05:47.222 --rc genhtml_legend=1 00:05:47.222 --rc geninfo_all_blocks=1 00:05:47.222 --rc geninfo_unexecuted_blocks=1 00:05:47.222 00:05:47.222 ' 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57984 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57984 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57984 ']' 00:05:47.222 21:56:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.222 21:56:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.222 [2024-12-06 21:56:20.071233] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:05:47.222 [2024-12-06 21:56:20.071779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57984 ] 00:05:47.483 [2024-12-06 21:56:20.229481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.483 [2024-12-06 21:56:20.329624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.483 [2024-12-06 21:56:20.329706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.057 21:56:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.057 21:56:20 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:48.057 21:56:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58001 00:05:48.057 21:56:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:48.057 21:56:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:48.317 [ 00:05:48.317 "bdev_malloc_delete", 00:05:48.317 "bdev_malloc_create", 00:05:48.317 "bdev_null_resize", 00:05:48.317 "bdev_null_delete", 00:05:48.317 "bdev_null_create", 00:05:48.317 "bdev_nvme_cuse_unregister", 00:05:48.317 "bdev_nvme_cuse_register", 00:05:48.317 "bdev_opal_new_user", 00:05:48.317 "bdev_opal_set_lock_state", 00:05:48.317 "bdev_opal_delete", 00:05:48.317 "bdev_opal_get_info", 00:05:48.317 "bdev_opal_create", 00:05:48.317 "bdev_nvme_opal_revert", 00:05:48.317 "bdev_nvme_opal_init", 00:05:48.317 "bdev_nvme_send_cmd", 00:05:48.317 "bdev_nvme_set_keys", 00:05:48.317 "bdev_nvme_get_path_iostat", 00:05:48.317 "bdev_nvme_get_mdns_discovery_info", 00:05:48.317 "bdev_nvme_stop_mdns_discovery", 00:05:48.317 "bdev_nvme_start_mdns_discovery", 00:05:48.317 "bdev_nvme_set_multipath_policy", 00:05:48.317 "bdev_nvme_set_preferred_path", 00:05:48.317 "bdev_nvme_get_io_paths", 00:05:48.317 "bdev_nvme_remove_error_injection", 00:05:48.317 "bdev_nvme_add_error_injection", 00:05:48.317 "bdev_nvme_get_discovery_info", 00:05:48.317 "bdev_nvme_stop_discovery", 00:05:48.317 "bdev_nvme_start_discovery", 00:05:48.317 "bdev_nvme_get_controller_health_info", 00:05:48.318 "bdev_nvme_disable_controller", 00:05:48.318 "bdev_nvme_enable_controller", 00:05:48.318 "bdev_nvme_reset_controller", 00:05:48.318 "bdev_nvme_get_transport_statistics", 00:05:48.318 "bdev_nvme_apply_firmware", 00:05:48.318 "bdev_nvme_detach_controller", 00:05:48.318 "bdev_nvme_get_controllers", 00:05:48.318 "bdev_nvme_attach_controller", 00:05:48.318 "bdev_nvme_set_hotplug", 00:05:48.318 "bdev_nvme_set_options", 00:05:48.318 "bdev_passthru_delete", 00:05:48.318 "bdev_passthru_create", 00:05:48.318 "bdev_lvol_set_parent_bdev", 00:05:48.318 "bdev_lvol_set_parent", 00:05:48.318 "bdev_lvol_check_shallow_copy", 00:05:48.318 "bdev_lvol_start_shallow_copy", 00:05:48.318 "bdev_lvol_grow_lvstore", 00:05:48.318 "bdev_lvol_get_lvols", 00:05:48.318 "bdev_lvol_get_lvstores", 00:05:48.318 "bdev_lvol_delete", 00:05:48.318 "bdev_lvol_set_read_only", 00:05:48.318 "bdev_lvol_resize", 00:05:48.318 "bdev_lvol_decouple_parent", 00:05:48.318 "bdev_lvol_inflate", 00:05:48.318 "bdev_lvol_rename", 00:05:48.318 "bdev_lvol_clone_bdev", 00:05:48.318 "bdev_lvol_clone", 00:05:48.318 "bdev_lvol_snapshot", 00:05:48.318 "bdev_lvol_create", 00:05:48.318 "bdev_lvol_delete_lvstore", 00:05:48.318 "bdev_lvol_rename_lvstore", 00:05:48.318 
"bdev_lvol_create_lvstore", 00:05:48.318 "bdev_raid_set_options", 00:05:48.318 "bdev_raid_remove_base_bdev", 00:05:48.318 "bdev_raid_add_base_bdev", 00:05:48.318 "bdev_raid_delete", 00:05:48.318 "bdev_raid_create", 00:05:48.318 "bdev_raid_get_bdevs", 00:05:48.318 "bdev_error_inject_error", 00:05:48.318 "bdev_error_delete", 00:05:48.318 "bdev_error_create", 00:05:48.318 "bdev_split_delete", 00:05:48.318 "bdev_split_create", 00:05:48.318 "bdev_delay_delete", 00:05:48.318 "bdev_delay_create", 00:05:48.318 "bdev_delay_update_latency", 00:05:48.318 "bdev_zone_block_delete", 00:05:48.318 "bdev_zone_block_create", 00:05:48.318 "blobfs_create", 00:05:48.318 "blobfs_detect", 00:05:48.318 "blobfs_set_cache_size", 00:05:48.318 "bdev_xnvme_delete", 00:05:48.318 "bdev_xnvme_create", 00:05:48.318 "bdev_aio_delete", 00:05:48.318 "bdev_aio_rescan", 00:05:48.318 "bdev_aio_create", 00:05:48.318 "bdev_ftl_set_property", 00:05:48.318 "bdev_ftl_get_properties", 00:05:48.318 "bdev_ftl_get_stats", 00:05:48.318 "bdev_ftl_unmap", 00:05:48.318 "bdev_ftl_unload", 00:05:48.318 "bdev_ftl_delete", 00:05:48.318 "bdev_ftl_load", 00:05:48.318 "bdev_ftl_create", 00:05:48.318 "bdev_virtio_attach_controller", 00:05:48.318 "bdev_virtio_scsi_get_devices", 00:05:48.318 "bdev_virtio_detach_controller", 00:05:48.318 "bdev_virtio_blk_set_hotplug", 00:05:48.318 "bdev_iscsi_delete", 00:05:48.318 "bdev_iscsi_create", 00:05:48.318 "bdev_iscsi_set_options", 00:05:48.318 "accel_error_inject_error", 00:05:48.318 "ioat_scan_accel_module", 00:05:48.318 "dsa_scan_accel_module", 00:05:48.318 "iaa_scan_accel_module", 00:05:48.318 "keyring_file_remove_key", 00:05:48.318 "keyring_file_add_key", 00:05:48.318 "keyring_linux_set_options", 00:05:48.318 "fsdev_aio_delete", 00:05:48.318 "fsdev_aio_create", 00:05:48.318 "iscsi_get_histogram", 00:05:48.318 "iscsi_enable_histogram", 00:05:48.318 "iscsi_set_options", 00:05:48.318 "iscsi_get_auth_groups", 00:05:48.318 "iscsi_auth_group_remove_secret", 00:05:48.318 "iscsi_auth_group_add_secret", 00:05:48.318 "iscsi_delete_auth_group", 00:05:48.318 "iscsi_create_auth_group", 00:05:48.318 "iscsi_set_discovery_auth", 00:05:48.318 "iscsi_get_options", 00:05:48.318 "iscsi_target_node_request_logout", 00:05:48.318 "iscsi_target_node_set_redirect", 00:05:48.318 "iscsi_target_node_set_auth", 00:05:48.318 "iscsi_target_node_add_lun", 00:05:48.318 "iscsi_get_stats", 00:05:48.318 "iscsi_get_connections", 00:05:48.318 "iscsi_portal_group_set_auth", 00:05:48.318 "iscsi_start_portal_group", 00:05:48.318 "iscsi_delete_portal_group", 00:05:48.318 "iscsi_create_portal_group", 00:05:48.318 "iscsi_get_portal_groups", 00:05:48.318 "iscsi_delete_target_node", 00:05:48.318 "iscsi_target_node_remove_pg_ig_maps", 00:05:48.318 "iscsi_target_node_add_pg_ig_maps", 00:05:48.318 "iscsi_create_target_node", 00:05:48.318 "iscsi_get_target_nodes", 00:05:48.318 "iscsi_delete_initiator_group", 00:05:48.318 "iscsi_initiator_group_remove_initiators", 00:05:48.318 "iscsi_initiator_group_add_initiators", 00:05:48.318 "iscsi_create_initiator_group", 00:05:48.318 "iscsi_get_initiator_groups", 00:05:48.318 "nvmf_set_crdt", 00:05:48.318 "nvmf_set_config", 00:05:48.318 "nvmf_set_max_subsystems", 00:05:48.318 "nvmf_stop_mdns_prr", 00:05:48.318 "nvmf_publish_mdns_prr", 00:05:48.318 "nvmf_subsystem_get_listeners", 00:05:48.318 "nvmf_subsystem_get_qpairs", 00:05:48.318 "nvmf_subsystem_get_controllers", 00:05:48.318 "nvmf_get_stats", 00:05:48.318 "nvmf_get_transports", 00:05:48.318 "nvmf_create_transport", 00:05:48.318 "nvmf_get_targets", 00:05:48.318 
"nvmf_delete_target", 00:05:48.318 "nvmf_create_target", 00:05:48.318 "nvmf_subsystem_allow_any_host", 00:05:48.318 "nvmf_subsystem_set_keys", 00:05:48.318 "nvmf_subsystem_remove_host", 00:05:48.318 "nvmf_subsystem_add_host", 00:05:48.318 "nvmf_ns_remove_host", 00:05:48.318 "nvmf_ns_add_host", 00:05:48.318 "nvmf_subsystem_remove_ns", 00:05:48.318 "nvmf_subsystem_set_ns_ana_group", 00:05:48.318 "nvmf_subsystem_add_ns", 00:05:48.318 "nvmf_subsystem_listener_set_ana_state", 00:05:48.318 "nvmf_discovery_get_referrals", 00:05:48.318 "nvmf_discovery_remove_referral", 00:05:48.318 "nvmf_discovery_add_referral", 00:05:48.318 "nvmf_subsystem_remove_listener", 00:05:48.318 "nvmf_subsystem_add_listener", 00:05:48.318 "nvmf_delete_subsystem", 00:05:48.318 "nvmf_create_subsystem", 00:05:48.318 "nvmf_get_subsystems", 00:05:48.318 "env_dpdk_get_mem_stats", 00:05:48.318 "nbd_get_disks", 00:05:48.318 "nbd_stop_disk", 00:05:48.318 "nbd_start_disk", 00:05:48.318 "ublk_recover_disk", 00:05:48.318 "ublk_get_disks", 00:05:48.318 "ublk_stop_disk", 00:05:48.318 "ublk_start_disk", 00:05:48.318 "ublk_destroy_target", 00:05:48.318 "ublk_create_target", 00:05:48.318 "virtio_blk_create_transport", 00:05:48.318 "virtio_blk_get_transports", 00:05:48.318 "vhost_controller_set_coalescing", 00:05:48.318 "vhost_get_controllers", 00:05:48.318 "vhost_delete_controller", 00:05:48.318 "vhost_create_blk_controller", 00:05:48.318 "vhost_scsi_controller_remove_target", 00:05:48.318 "vhost_scsi_controller_add_target", 00:05:48.318 "vhost_start_scsi_controller", 00:05:48.318 "vhost_create_scsi_controller", 00:05:48.318 "thread_set_cpumask", 00:05:48.318 "scheduler_set_options", 00:05:48.318 "framework_get_governor", 00:05:48.318 "framework_get_scheduler", 00:05:48.318 "framework_set_scheduler", 00:05:48.318 "framework_get_reactors", 00:05:48.318 "thread_get_io_channels", 00:05:48.318 "thread_get_pollers", 00:05:48.318 "thread_get_stats", 00:05:48.318 "framework_monitor_context_switch", 00:05:48.318 "spdk_kill_instance", 00:05:48.318 "log_enable_timestamps", 00:05:48.318 "log_get_flags", 00:05:48.318 "log_clear_flag", 00:05:48.318 "log_set_flag", 00:05:48.318 "log_get_level", 00:05:48.318 "log_set_level", 00:05:48.318 "log_get_print_level", 00:05:48.318 "log_set_print_level", 00:05:48.318 "framework_enable_cpumask_locks", 00:05:48.318 "framework_disable_cpumask_locks", 00:05:48.318 "framework_wait_init", 00:05:48.318 "framework_start_init", 00:05:48.318 "scsi_get_devices", 00:05:48.318 "bdev_get_histogram", 00:05:48.318 "bdev_enable_histogram", 00:05:48.318 "bdev_set_qos_limit", 00:05:48.318 "bdev_set_qd_sampling_period", 00:05:48.318 "bdev_get_bdevs", 00:05:48.318 "bdev_reset_iostat", 00:05:48.318 "bdev_get_iostat", 00:05:48.318 "bdev_examine", 00:05:48.318 "bdev_wait_for_examine", 00:05:48.318 "bdev_set_options", 00:05:48.318 "accel_get_stats", 00:05:48.318 "accel_set_options", 00:05:48.318 "accel_set_driver", 00:05:48.318 "accel_crypto_key_destroy", 00:05:48.318 "accel_crypto_keys_get", 00:05:48.318 "accel_crypto_key_create", 00:05:48.318 "accel_assign_opc", 00:05:48.318 "accel_get_module_info", 00:05:48.318 "accel_get_opc_assignments", 00:05:48.318 "vmd_rescan", 00:05:48.318 "vmd_remove_device", 00:05:48.318 "vmd_enable", 00:05:48.318 "sock_get_default_impl", 00:05:48.318 "sock_set_default_impl", 00:05:48.318 "sock_impl_set_options", 00:05:48.318 "sock_impl_get_options", 00:05:48.318 "iobuf_get_stats", 00:05:48.318 "iobuf_set_options", 00:05:48.318 "keyring_get_keys", 00:05:48.318 "framework_get_pci_devices", 00:05:48.318 
"framework_get_config", 00:05:48.318 "framework_get_subsystems", 00:05:48.318 "fsdev_set_opts", 00:05:48.318 "fsdev_get_opts", 00:05:48.318 "trace_get_info", 00:05:48.318 "trace_get_tpoint_group_mask", 00:05:48.318 "trace_disable_tpoint_group", 00:05:48.318 "trace_enable_tpoint_group", 00:05:48.318 "trace_clear_tpoint_mask", 00:05:48.318 "trace_set_tpoint_mask", 00:05:48.318 "notify_get_notifications", 00:05:48.318 "notify_get_types", 00:05:48.318 "spdk_get_version", 00:05:48.318 "rpc_get_methods" 00:05:48.318 ] 00:05:48.318 21:56:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:48.318 21:56:21 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.318 21:56:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.318 21:56:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:48.318 21:56:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57984 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57984 ']' 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57984 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57984 00:05:48.319 killing process with pid 57984 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57984' 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57984 00:05:48.319 21:56:21 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57984 00:05:50.228 ************************************ 00:05:50.228 END TEST spdkcli_tcp 00:05:50.228 ************************************ 00:05:50.228 00:05:50.228 real 0m2.878s 00:05:50.228 user 0m5.134s 00:05:50.228 sys 0m0.437s 00:05:50.228 21:56:22 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.228 21:56:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.228 21:56:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.228 21:56:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.228 21:56:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.229 21:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:50.229 ************************************ 00:05:50.229 START TEST dpdk_mem_utility 00:05:50.229 ************************************ 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.229 * Looking for test storage... 
00:05:50.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.229 21:56:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.229 --rc genhtml_branch_coverage=1 00:05:50.229 --rc genhtml_function_coverage=1 00:05:50.229 --rc genhtml_legend=1 00:05:50.229 --rc geninfo_all_blocks=1 00:05:50.229 --rc geninfo_unexecuted_blocks=1 00:05:50.229 00:05:50.229 ' 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.229 --rc 
genhtml_branch_coverage=1 00:05:50.229 --rc genhtml_function_coverage=1 00:05:50.229 --rc genhtml_legend=1 00:05:50.229 --rc geninfo_all_blocks=1 00:05:50.229 --rc geninfo_unexecuted_blocks=1 00:05:50.229 00:05:50.229 ' 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.229 --rc genhtml_branch_coverage=1 00:05:50.229 --rc genhtml_function_coverage=1 00:05:50.229 --rc genhtml_legend=1 00:05:50.229 --rc geninfo_all_blocks=1 00:05:50.229 --rc geninfo_unexecuted_blocks=1 00:05:50.229 00:05:50.229 ' 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.229 --rc genhtml_branch_coverage=1 00:05:50.229 --rc genhtml_function_coverage=1 00:05:50.229 --rc genhtml_legend=1 00:05:50.229 --rc geninfo_all_blocks=1 00:05:50.229 --rc geninfo_unexecuted_blocks=1 00:05:50.229 00:05:50.229 ' 00:05:50.229 21:56:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:50.229 21:56:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58095 00:05:50.229 21:56:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58095 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58095 ']' 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.229 21:56:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.229 21:56:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.229 [2024-12-06 21:56:22.974957] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:05:50.229 [2024-12-06 21:56:22.975241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58095 ] 00:05:50.490 [2024-12-06 21:56:23.138635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.490 [2024-12-06 21:56:23.243732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.062 21:56:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.062 21:56:23 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:51.062 21:56:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:51.062 21:56:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:51.063 21:56:23 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.063 21:56:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.063 { 00:05:51.063 "filename": "/tmp/spdk_mem_dump.txt" 00:05:51.063 } 00:05:51.063 21:56:23 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.063 21:56:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:51.326 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:51.326 1 heaps totaling size 824.000000 MiB 00:05:51.326 size: 824.000000 MiB heap id: 0 00:05:51.326 end heaps---------- 00:05:51.326 9 mempools totaling size 603.782043 MiB 00:05:51.326 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:51.326 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:51.326 size: 100.555481 MiB name: bdev_io_58095 00:05:51.326 size: 50.003479 MiB name: msgpool_58095 00:05:51.326 size: 36.509338 MiB name: fsdev_io_58095 00:05:51.326 size: 21.763794 MiB name: PDU_Pool 00:05:51.326 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:51.326 size: 4.133484 MiB name: evtpool_58095 00:05:51.326 size: 0.026123 MiB name: Session_Pool 00:05:51.326 end mempools------- 00:05:51.326 6 memzones totaling size 4.142822 MiB 00:05:51.326 size: 1.000366 MiB name: RG_ring_0_58095 00:05:51.326 size: 1.000366 MiB name: RG_ring_1_58095 00:05:51.326 size: 1.000366 MiB name: RG_ring_4_58095 00:05:51.326 size: 1.000366 MiB name: RG_ring_5_58095 00:05:51.326 size: 0.125366 MiB name: RG_ring_2_58095 00:05:51.326 size: 0.015991 MiB name: RG_ring_3_58095 00:05:51.326 end memzones------- 00:05:51.326 21:56:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:51.326 heap id: 0 total size: 824.000000 MiB number of busy elements: 330 number of free elements: 18 00:05:51.326 list of free elements. 
size: 16.777710 MiB 00:05:51.326 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:51.326 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:51.326 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:51.326 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:51.326 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:51.326 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:51.326 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:51.326 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:51.326 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:51.326 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:51.326 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:51.326 element at address: 0x20001b400000 with size: 0.558777 MiB 00:05:51.326 element at address: 0x200000c00000 with size: 0.489441 MiB 00:05:51.326 element at address: 0x200019600000 with size: 0.488220 MiB 00:05:51.326 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:51.326 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:51.326 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:51.326 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:51.326 list of standard malloc elements. size: 199.291382 MiB 00:05:51.326 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:51.326 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:51.326 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:51.326 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:51.326 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:51.327 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:51.327 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:51.327 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:51.327 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:51.327 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:51.327 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:51.327 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:51.327 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:51.327 element at 
address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:05:51.327 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f0c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f1c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f2c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b490ec0 with size: 0.000244 MiB 
00:05:51.327 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:51.327 element at 
address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:51.327 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886c980 
with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:51.327 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886fa80 with size: 0.000244 MiB 
00:05:51.328 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:51.328 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:51.328 list of memzone associated elements. size: 607.930908 MiB 00:05:51.328 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:51.328 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:51.328 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:51.328 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:51.328 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:51.328 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58095_0 00:05:51.328 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:51.328 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58095_0 00:05:51.328 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:51.328 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58095_0 00:05:51.328 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:51.328 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:51.328 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:51.328 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:51.328 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:51.328 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58095_0 00:05:51.328 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:51.328 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58095 00:05:51.328 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:51.328 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58095 00:05:51.328 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:51.328 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:51.328 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:51.328 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:51.328 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:51.328 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:51.328 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:51.328 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:51.328 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:51.328 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58095 00:05:51.328 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:51.328 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58095 00:05:51.328 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:51.328 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58095 00:05:51.328 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:51.328 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58095 00:05:51.328 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:51.328 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58095 00:05:51.328 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:51.328 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58095 00:05:51.328 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:51.328 
associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:51.328 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:51.328 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:51.328 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:51.328 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:51.328 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:51.328 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58095 00:05:51.328 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:51.328 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58095 00:05:51.328 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:51.328 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:51.328 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:51.328 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:51.328 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:51.328 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58095 00:05:51.328 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:51.328 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:51.328 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:51.328 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58095 00:05:51.328 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:51.328 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58095 00:05:51.328 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:51.328 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58095 00:05:51.328 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:51.328 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:51.328 21:56:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:51.328 21:56:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58095 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58095 ']' 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58095 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58095 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.328 killing process with pid 58095 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58095' 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58095 00:05:51.328 21:56:24 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58095 00:05:53.227 00:05:53.227 real 0m2.850s 00:05:53.227 user 0m2.793s 00:05:53.227 sys 0m0.484s 00:05:53.227 21:56:25 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.227 21:56:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.227 ************************************ 00:05:53.227 END TEST dpdk_mem_utility 00:05:53.227 
************************************ 00:05:53.227 21:56:25 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.227 21:56:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.227 21:56:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.227 21:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:53.227 ************************************ 00:05:53.227 START TEST event 00:05:53.227 ************************************ 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.227 * Looking for test storage... 00:05:53.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:53.227 21:56:25 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.227 21:56:25 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.227 21:56:25 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.227 21:56:25 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.227 21:56:25 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.227 21:56:25 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.227 21:56:25 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.227 21:56:25 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.227 21:56:25 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.227 21:56:25 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.227 21:56:25 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.227 21:56:25 event -- scripts/common.sh@344 -- # case "$op" in 00:05:53.227 21:56:25 event -- scripts/common.sh@345 -- # : 1 00:05:53.227 21:56:25 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.227 21:56:25 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.227 21:56:25 event -- scripts/common.sh@365 -- # decimal 1 00:05:53.227 21:56:25 event -- scripts/common.sh@353 -- # local d=1 00:05:53.227 21:56:25 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.227 21:56:25 event -- scripts/common.sh@355 -- # echo 1 00:05:53.227 21:56:25 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.227 21:56:25 event -- scripts/common.sh@366 -- # decimal 2 00:05:53.227 21:56:25 event -- scripts/common.sh@353 -- # local d=2 00:05:53.227 21:56:25 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.227 21:56:25 event -- scripts/common.sh@355 -- # echo 2 00:05:53.227 21:56:25 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.227 21:56:25 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.227 21:56:25 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.227 21:56:25 event -- scripts/common.sh@368 -- # return 0 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:53.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.227 --rc genhtml_branch_coverage=1 00:05:53.227 --rc genhtml_function_coverage=1 00:05:53.227 --rc genhtml_legend=1 00:05:53.227 --rc geninfo_all_blocks=1 00:05:53.227 --rc geninfo_unexecuted_blocks=1 00:05:53.227 00:05:53.227 ' 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:53.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.227 --rc genhtml_branch_coverage=1 00:05:53.227 --rc genhtml_function_coverage=1 00:05:53.227 --rc genhtml_legend=1 00:05:53.227 --rc geninfo_all_blocks=1 00:05:53.227 --rc geninfo_unexecuted_blocks=1 00:05:53.227 00:05:53.227 ' 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:53.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.227 --rc genhtml_branch_coverage=1 00:05:53.227 --rc genhtml_function_coverage=1 00:05:53.227 --rc genhtml_legend=1 00:05:53.227 --rc geninfo_all_blocks=1 00:05:53.227 --rc geninfo_unexecuted_blocks=1 00:05:53.227 00:05:53.227 ' 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:53.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.227 --rc genhtml_branch_coverage=1 00:05:53.227 --rc genhtml_function_coverage=1 00:05:53.227 --rc genhtml_legend=1 00:05:53.227 --rc geninfo_all_blocks=1 00:05:53.227 --rc geninfo_unexecuted_blocks=1 00:05:53.227 00:05:53.227 ' 00:05:53.227 21:56:25 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:53.227 21:56:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.227 21:56:25 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:53.227 21:56:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.227 21:56:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.227 ************************************ 00:05:53.227 START TEST event_perf 00:05:53.227 ************************************ 00:05:53.227 21:56:25 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.227 Running I/O for 1 seconds...[2024-12-06 
21:56:25.832656] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:05:53.227 [2024-12-06 21:56:25.832760] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58192 ] 00:05:53.227 [2024-12-06 21:56:25.992677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.525 [2024-12-06 21:56:26.097467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.525 [2024-12-06 21:56:26.098007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.525 [2024-12-06 21:56:26.098111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.525 Running I/O for 1 seconds...[2024-12-06 21:56:26.098131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.455 00:05:54.455 lcore 0: 157928 00:05:54.455 lcore 1: 157931 00:05:54.455 lcore 2: 157934 00:05:54.455 lcore 3: 157925 00:05:54.455 done. 00:05:54.455 00:05:54.455 ************************************ 00:05:54.455 END TEST event_perf 00:05:54.455 ************************************ 00:05:54.455 real 0m1.463s 00:05:54.455 user 0m4.246s 00:05:54.455 sys 0m0.092s 00:05:54.455 21:56:27 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.455 21:56:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.455 21:56:27 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.455 21:56:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:54.455 21:56:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.455 21:56:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.715 ************************************ 00:05:54.715 START TEST event_reactor 00:05:54.715 ************************************ 00:05:54.715 21:56:27 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.715 [2024-12-06 21:56:27.360304] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
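# --- sketch (not from the captured run): the dpdk_mem_utility flow earlier in this section ---
# The dpdk_mem_utility test reduces to three commands against a running SPDK
# target, issued from the repo root: the RPC asks the target to write the raw
# dump file, and dpdk_mem_info.py renders the summaries shown above. All three
# paths and flags are the ones the test traces used.
scripts/rpc.py env_dpdk_get_mem_stats      # -> {"filename": "/tmp/spdk_mem_dump.txt"}
scripts/dpdk_mem_info.py                   # heap / mempool / memzone totals
scripts/dpdk_mem_info.py -m 0              # per-element detail for heap id 0
# --- end sketch ---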
00:05:54.715 [2024-12-06 21:56:27.360489] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58226 ] 00:05:54.715 [2024-12-06 21:56:27.523257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.973 [2024-12-06 21:56:27.629079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.906 test_start 00:05:55.906 oneshot 00:05:55.906 tick 100 00:05:55.906 tick 100 00:05:55.906 tick 250 00:05:55.906 tick 100 00:05:55.906 tick 100 00:05:55.906 tick 100 00:05:55.906 tick 250 00:05:55.906 tick 500 00:05:55.906 tick 100 00:05:55.906 tick 100 00:05:55.906 tick 250 00:05:55.906 tick 100 00:05:55.906 tick 100 00:05:55.906 test_end 00:05:56.163 00:05:56.163 real 0m1.461s 00:05:56.163 user 0m1.275s 00:05:56.163 sys 0m0.076s 00:05:56.163 21:56:28 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.163 21:56:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:56.163 ************************************ 00:05:56.163 END TEST event_reactor 00:05:56.163 ************************************ 00:05:56.164 21:56:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.164 21:56:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:56.164 21:56:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.164 21:56:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.164 ************************************ 00:05:56.164 START TEST event_reactor_perf 00:05:56.164 ************************************ 00:05:56.164 21:56:28 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.164 [2024-12-06 21:56:28.879465] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
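# --- sketch (not from the captured run): the three event-framework benchmarks in this section ---
# Each is a standalone binary; -m sets the core mask and -t the runtime in
# seconds. The invocations below are the same ones traced in this section.
test/event/event_perf/event_perf -m 0xF -t 1    # per-lcore event counts on 4 cores
test/event/reactor/reactor -t 1                 # oneshot/tick timer trace
test/event/reactor_perf/reactor_perf -t 1       # events-per-second figure
# --- end sketch ---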
00:05:56.164 [2024-12-06 21:56:28.879596] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58268 ] 00:05:56.421 [2024-12-06 21:56:29.057587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.421 [2024-12-06 21:56:29.155520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.794 test_start 00:05:57.794 test_end 00:05:57.794 Performance: 313947 events per second 00:05:57.794 00:05:57.794 real 0m1.467s 00:05:57.794 user 0m1.282s 00:05:57.794 sys 0m0.075s 00:05:57.794 21:56:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.794 ************************************ 00:05:57.794 END TEST event_reactor_perf 00:05:57.794 ************************************ 00:05:57.794 21:56:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.794 21:56:30 event -- event/event.sh@49 -- # uname -s 00:05:57.794 21:56:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:57.794 21:56:30 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.794 21:56:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.794 21:56:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.794 21:56:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.794 ************************************ 00:05:57.794 START TEST event_scheduler 00:05:57.794 ************************************ 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.794 * Looking for test storage... 
00:05:57.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.794 21:56:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.794 --rc genhtml_branch_coverage=1 00:05:57.794 --rc genhtml_function_coverage=1 00:05:57.794 --rc genhtml_legend=1 00:05:57.794 --rc geninfo_all_blocks=1 00:05:57.794 --rc geninfo_unexecuted_blocks=1 00:05:57.794 00:05:57.794 ' 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.794 --rc genhtml_branch_coverage=1 00:05:57.794 --rc genhtml_function_coverage=1 00:05:57.794 --rc genhtml_legend=1 00:05:57.794 --rc geninfo_all_blocks=1 00:05:57.794 --rc geninfo_unexecuted_blocks=1 00:05:57.794 00:05:57.794 ' 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.794 --rc genhtml_branch_coverage=1 00:05:57.794 --rc genhtml_function_coverage=1 00:05:57.794 --rc genhtml_legend=1 00:05:57.794 --rc geninfo_all_blocks=1 00:05:57.794 --rc geninfo_unexecuted_blocks=1 00:05:57.794 00:05:57.794 ' 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.794 --rc genhtml_branch_coverage=1 00:05:57.794 --rc genhtml_function_coverage=1 00:05:57.794 --rc genhtml_legend=1 00:05:57.794 --rc geninfo_all_blocks=1 00:05:57.794 --rc geninfo_unexecuted_blocks=1 00:05:57.794 00:05:57.794 ' 00:05:57.794 21:56:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:57.794 21:56:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58337 00:05:57.794 21:56:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.794 21:56:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58337 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58337 ']' 00:05:57.794 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.794 21:56:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.794 21:56:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:57.794 [2024-12-06 21:56:30.601426] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:05:57.795 [2024-12-06 21:56:30.601554] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58337 ] 00:05:58.051 [2024-12-06 21:56:30.764206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.051 [2024-12-06 21:56:30.870122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.051 [2024-12-06 21:56:30.870736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.051 [2024-12-06 21:56:30.871020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.051 [2024-12-06 21:56:30.871143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.615 21:56:31 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.615 21:56:31 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:58.615 21:56:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:58.615 21:56:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.615 21:56:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.615 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.615 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.615 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.615 POWER: Cannot set governor of lcore 0 to performance 00:05:58.615 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.615 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.615 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.615 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.615 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:58.615 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:58.615 POWER: Unable to set Power Management Environment for lcore 0 00:05:58.615 [2024-12-06 21:56:31.440343] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:58.615 [2024-12-06 21:56:31.440381] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:58.615 [2024-12-06 21:56:31.440391] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:58.615 [2024-12-06 
21:56:31.440407] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:58.615 [2024-12-06 21:56:31.440415] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:58.615 [2024-12-06 21:56:31.440423] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:58.615 21:56:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.615 21:56:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:58.615 21:56:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.615 21:56:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 [2024-12-06 21:56:31.669066] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:58.871 21:56:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.871 21:56:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:58.871 21:56:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.871 21:56:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.871 21:56:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 ************************************ 00:05:58.871 START TEST scheduler_create_thread 00:05:58.871 ************************************ 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 2 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 3 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 4 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:58.871 21:56:31 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 5 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 6 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 7 00:05:58.871 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.127 8 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.127 9 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.127 10 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.127 21:56:31 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.127 21:56:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.517 21:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.517 21:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:00.517 21:56:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:00.517 21:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.517 21:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.452 21:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.452 ************************************ 00:06:01.452 END TEST scheduler_create_thread 00:06:01.452 ************************************ 00:06:01.452 00:06:01.452 real 0m2.616s 00:06:01.452 user 0m0.016s 00:06:01.452 sys 0m0.004s 00:06:01.452 21:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.452 21:56:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.713 21:56:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:01.713 21:56:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58337 00:06:01.713 21:56:34 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58337 ']' 00:06:01.713 21:56:34 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58337 00:06:01.713 21:56:34 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:01.713 21:56:34 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.713 21:56:34 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58337 00:06:01.713 killing process with pid 58337 00:06:01.713 21:56:34 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:01.713 21:56:34 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:01.713 21:56:34 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58337' 00:06:01.713 21:56:34 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58337 00:06:01.713 
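# --- sketch (not from the captured run): driving the scheduler test over RPC ---
# The scheduler app starts with --wait-for-rpc, is switched to the dynamic
# scheduler, and threads are then created/adjusted/deleted through the test's
# scheduler_plugin; every call below appears in the traces above. Assumption:
# rpc.py can import scheduler_plugin (the harness arranges this; standalone you
# would likely point PYTHONPATH at test/event/scheduler). Thread ids 11 and 12
# are simply the ids this particular run returned.
test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scripts/rpc.py framework_set_scheduler dynamic
scripts/rpc.py framework_start_init
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12
# --- end sketch ---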
21:56:34 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58337 00:06:01.974 [2024-12-06 21:56:34.784279] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:02.914 00:06:02.914 real 0m5.208s 00:06:02.914 user 0m9.065s 00:06:02.914 sys 0m0.361s 00:06:02.914 ************************************ 00:06:02.914 END TEST event_scheduler 00:06:02.914 ************************************ 00:06:02.914 21:56:35 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.914 21:56:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.914 21:56:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:02.914 21:56:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:02.914 21:56:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.914 21:56:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.914 21:56:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.914 ************************************ 00:06:02.914 START TEST app_repeat 00:06:02.914 ************************************ 00:06:02.914 21:56:35 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58439 00:06:02.914 Process app_repeat pid: 58439 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58439' 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.914 spdk_app_start Round 0 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:02.914 21:56:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58439 /var/tmp/spdk-nbd.sock 00:06:02.915 21:56:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58439 ']' 00:06:02.915 21:56:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.915 21:56:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.915 21:56:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
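At this point the harness has forked app_repeat (pid 58439) in the background, armed a cleanup trap, and is blocking in waitforlisten until the RPC socket answers. The trace disables xtrace inside that helper, so only the retry budget (max_retries=100) is visible; the polling body below is therefore an assumption, not a transcription, and rpc_get_methods is just one plausible liveness probe:

    rpc_server=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_server..."
    for ((retry = 0; retry < 100; retry++)); do     # max_retries=100, as in the trace
        # assumed probe: any successful RPC means the socket is up
        scripts/rpc.py -s "$rpc_server" rpc_get_methods &>/dev/null && break
        sleep 0.5                                   # assumed backoff; xtrace hides the real loop
    done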
00:06:02.915 21:56:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.915 21:56:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.915 [2024-12-06 21:56:35.715149] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:06:02.915 [2024-12-06 21:56:35.715284] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58439 ] 00:06:03.174 [2024-12-06 21:56:35.880573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.174 [2024-12-06 21:56:36.015397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.174 [2024-12-06 21:56:36.015549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.772 21:56:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.772 21:56:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:03.772 21:56:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.030 Malloc0 00:06:04.030 21:56:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.288 Malloc1 00:06:04.288 21:56:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.288 21:56:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.546 /dev/nbd0 00:06:04.546 21:56:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.546 21:56:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.546 21:56:37 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.546 1+0 records in 00:06:04.546 1+0 records out 00:06:04.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143111 s, 28.6 MB/s 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.546 21:56:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.546 21:56:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.546 21:56:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.546 21:56:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.805 /dev/nbd1 00:06:04.805 21:56:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.805 21:56:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.805 1+0 records in 00:06:04.805 1+0 records out 00:06:04.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453847 s, 9.0 MB/s 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.805 21:56:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.805 21:56:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.805 
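Both waitfornbd checks above follow the same shape: poll /proc/partitions until the kernel exposes the device, then prove it is actually readable by pulling a single 4 KiB block through O_DIRECT and checking the byte count. A sketch reconstructed from the trace; the temp path is a stand-in for the suite's test/event/nbdtest file, and the sleep between retries is hidden by xtrace, so its interval is assumed:

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest size i
        for ((i = 1; i <= 20; i++)); do              # the (( i <= 20 )) retry bound in the trace
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                # assumed retry delay
        done
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]                             # a zero-byte read means the device is not ready
    }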
21:56:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.805 21:56:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.805 21:56:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.805 21:56:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.085 21:56:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.085 { 00:06:05.085 "nbd_device": "/dev/nbd0", 00:06:05.085 "bdev_name": "Malloc0" 00:06:05.085 }, 00:06:05.085 { 00:06:05.085 "nbd_device": "/dev/nbd1", 00:06:05.086 "bdev_name": "Malloc1" 00:06:05.086 } 00:06:05.086 ]' 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.086 { 00:06:05.086 "nbd_device": "/dev/nbd0", 00:06:05.086 "bdev_name": "Malloc0" 00:06:05.086 }, 00:06:05.086 { 00:06:05.086 "nbd_device": "/dev/nbd1", 00:06:05.086 "bdev_name": "Malloc1" 00:06:05.086 } 00:06:05.086 ]' 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.086 /dev/nbd1' 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.086 /dev/nbd1' 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.086 256+0 records in 00:06:05.086 256+0 records out 00:06:05.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00699247 s, 150 MB/s 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.086 256+0 records in 00:06:05.086 256+0 records out 00:06:05.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161328 s, 65.0 MB/s 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.086 21:56:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.345 256+0 records in 00:06:05.345 256+0 records out 00:06:05.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0464283 s, 22.6 MB/s 00:06:05.345 21:56:37 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.345 21:56:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.345 21:56:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.602 21:56:38 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.602 21:56:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.859 21:56:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.859 21:56:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.859 21:56:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.859 21:56:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.859 21:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.859 21:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.859 21:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.860 21:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.860 21:56:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.860 21:56:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.860 21:56:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.860 21:56:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.860 21:56:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.117 21:56:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.049 [2024-12-06 21:56:39.745949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.049 [2024-12-06 21:56:39.838076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.049 [2024-12-06 21:56:39.838234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.307 [2024-12-06 21:56:39.961120] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.307 [2024-12-06 21:56:39.961183] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.208 21:56:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.208 spdk_app_start Round 1 00:06:09.208 21:56:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:09.208 21:56:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58439 /var/tmp/spdk-nbd.sock 00:06:09.208 21:56:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58439 ']' 00:06:09.208 21:56:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.208 21:56:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.208 21:56:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
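Round 0's full cycle is complete at this point: 1 MiB of random data written through both nbd devices, byte-compared back, devices stopped, and the disk count back to zero before the app relaunches for Round 1. The write/verify core of each pass, reconstructed from the trace with a simplified signature (devices passed as arguments, both phases merged into one call; the suite's nbd_dd_data_verify takes the device list as one string plus a write/verify mode argument):

    nbd_dd_data_verify() {
        local nbd_list=("$@") tmp=/tmp/nbdrandtest i
        # write phase: 1 MiB of random data, mirrored onto every nbd device
        dd if=/dev/urandom of="$tmp" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp" of="$i" bs=4096 count=256 oflag=direct
        done
        # verify phase: byte-compare the first 1 MiB read back from each device
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp" "$i"
        done
        rm "$tmp"
    }

Invoked as nbd_dd_data_verify /dev/nbd0 /dev/nbd1; cmp exiting nonzero on any differing byte is what fails the test.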
00:06:09.208 21:56:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.208 21:56:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.466 21:56:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.466 21:56:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:09.466 21:56:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.723 Malloc0 00:06:09.723 21:56:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.982 Malloc1 00:06:09.982 21:56:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.982 21:56:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.240 /dev/nbd0 00:06:10.240 21:56:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.240 21:56:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.240 1+0 records in 00:06:10.240 1+0 records out 
00:06:10.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371956 s, 11.0 MB/s 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.240 21:56:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.240 21:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.240 21:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.240 21:56:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.497 /dev/nbd1 00:06:10.497 21:56:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.497 21:56:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.497 1+0 records in 00:06:10.497 1+0 records out 00:06:10.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458182 s, 8.9 MB/s 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.497 21:56:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.497 21:56:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.497 21:56:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.497 21:56:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.497 21:56:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.497 21:56:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.755 { 00:06:10.755 "nbd_device": "/dev/nbd0", 00:06:10.755 "bdev_name": "Malloc0" 00:06:10.755 }, 00:06:10.755 { 00:06:10.755 "nbd_device": "/dev/nbd1", 00:06:10.755 "bdev_name": "Malloc1" 00:06:10.755 } 
00:06:10.755 ]' 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.755 { 00:06:10.755 "nbd_device": "/dev/nbd0", 00:06:10.755 "bdev_name": "Malloc0" 00:06:10.755 }, 00:06:10.755 { 00:06:10.755 "nbd_device": "/dev/nbd1", 00:06:10.755 "bdev_name": "Malloc1" 00:06:10.755 } 00:06:10.755 ]' 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.755 /dev/nbd1' 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.755 /dev/nbd1' 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.755 21:56:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.756 256+0 records in 00:06:10.756 256+0 records out 00:06:10.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443085 s, 237 MB/s 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.756 256+0 records in 00:06:10.756 256+0 records out 00:06:10.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182615 s, 57.4 MB/s 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.756 256+0 records in 00:06:10.756 256+0 records out 00:06:10.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201182 s, 52.1 MB/s 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.756 21:56:43 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.756 21:56:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.014 21:56:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.272 21:56:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.272 21:56:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.272 21:56:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.272 21:56:44 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.529 21:56:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.529 21:56:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.529 21:56:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.529 21:56:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.529 21:56:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.529 21:56:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.529 21:56:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.529 21:56:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.529 21:56:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.529 21:56:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.787 21:56:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.725 [2024-12-06 21:56:45.228831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.725 [2024-12-06 21:56:45.337773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.725 [2024-12-06 21:56:45.337917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.725 [2024-12-06 21:56:45.469307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.725 [2024-12-06 21:56:45.469384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.627 spdk_app_start Round 2 00:06:14.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.627 21:56:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.627 21:56:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:14.627 21:56:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58439 /var/tmp/spdk-nbd.sock 00:06:14.627 21:56:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58439 ']' 00:06:14.627 21:56:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.627 21:56:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.627 21:56:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
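The count=0 assertions on either side of this point come straight out of the RPC JSON: nbd_get_disks returns the active exports, jq projects the device paths, and grep -c counts them. Note the bare true in the trace; grep -c exits nonzero on zero matches, so the helper plausibly masks that, as sketched here from the trace:

    nbd_get_count() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name
        nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # '|| true' keeps an empty list (grep exit 1) from failing the test; output is still "0"
        echo "$nbd_disks_name" | grep -c /dev/nbd || true
    }

After teardown the harness checks count=$(nbd_get_count "$rpc_server") against 0, which is the '[' 0 -ne 0 ']' test visible in the trace.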
00:06:14.627 21:56:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.627 21:56:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.887 21:56:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.887 21:56:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:14.887 21:56:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.145 Malloc0 00:06:15.145 21:56:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.403 Malloc1 00:06:15.403 21:56:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.403 21:56:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.403 21:56:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.403 21:56:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.403 21:56:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.403 21:56:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.403 21:56:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.403 21:56:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.404 21:56:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.404 21:56:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.404 21:56:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.404 21:56:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.404 21:56:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.404 21:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.404 21:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.404 21:56:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.661 /dev/nbd0 00:06:15.661 21:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.661 21:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.661 1+0 records in 00:06:15.661 1+0 records out 
00:06:15.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581586 s, 7.0 MB/s 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.661 21:56:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:15.661 21:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.661 21:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.661 21:56:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.918 /dev/nbd1 00:06:15.918 21:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.918 21:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.918 1+0 records in 00:06:15.918 1+0 records out 00:06:15.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352186 s, 11.6 MB/s 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.918 21:56:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:15.918 21:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.918 21:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.918 21:56:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.918 21:56:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.918 21:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.174 { 00:06:16.174 "nbd_device": "/dev/nbd0", 00:06:16.174 "bdev_name": "Malloc0" 00:06:16.174 }, 00:06:16.174 { 00:06:16.174 "nbd_device": "/dev/nbd1", 00:06:16.174 "bdev_name": "Malloc1" 00:06:16.174 } 
00:06:16.174 ]' 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.174 { 00:06:16.174 "nbd_device": "/dev/nbd0", 00:06:16.174 "bdev_name": "Malloc0" 00:06:16.174 }, 00:06:16.174 { 00:06:16.174 "nbd_device": "/dev/nbd1", 00:06:16.174 "bdev_name": "Malloc1" 00:06:16.174 } 00:06:16.174 ]' 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.174 /dev/nbd1' 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.174 /dev/nbd1' 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.174 256+0 records in 00:06:16.174 256+0 records out 00:06:16.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00932792 s, 112 MB/s 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.174 256+0 records in 00:06:16.174 256+0 records out 00:06:16.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176526 s, 59.4 MB/s 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.174 21:56:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.175 256+0 records in 00:06:16.175 256+0 records out 00:06:16.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196569 s, 53.3 MB/s 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.175 21:56:48 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.175 21:56:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.431 21:56:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.687 21:56:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.944 21:56:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.944 21:56:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.201 21:56:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.139 [2024-12-06 21:56:50.747245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.139 [2024-12-06 21:56:50.860299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.139 [2024-12-06 21:56:50.860488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.139 [2024-12-06 21:56:51.009325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.139 [2024-12-06 21:56:51.009416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.662 21:56:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58439 /var/tmp/spdk-nbd.sock 00:06:20.662 21:56:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58439 ']' 00:06:20.662 21:56:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.662 21:56:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.662 21:56:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
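This is the last of the waits: the log has now been through the same create/verify/SIGTERM cycle three times, and the event.sh@38 waitforlisten above catches the Round 3 relaunch before the final kill. The driver shape, reconstructed from the per-round traces (helper names as they appear in the trace; RPC argument details abbreviated):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"                   # block until the app's RPC socket answers
        scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096  # -> Malloc0
        scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096  # -> Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM  # -t 4 lets the app re-init for the next round
        sleep 3
    done
    waitforlisten "$repeat_pid" "$rpc_server"                       # the Round 3 startup seen here, then killprocess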
00:06:20.662 21:56:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.662 21:56:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:20.662 21:56:53 event.app_repeat -- event/event.sh@39 -- # killprocess 58439 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58439 ']' 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58439 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58439 00:06:20.662 killing process with pid 58439 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58439' 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58439 00:06:20.662 21:56:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58439 00:06:21.227 spdk_app_start is called in Round 0. 00:06:21.227 Shutdown signal received, stop current app iteration 00:06:21.227 Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 reinitialization... 00:06:21.227 spdk_app_start is called in Round 1. 00:06:21.227 Shutdown signal received, stop current app iteration 00:06:21.227 Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 reinitialization... 00:06:21.227 spdk_app_start is called in Round 2. 00:06:21.227 Shutdown signal received, stop current app iteration 00:06:21.227 Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 reinitialization... 00:06:21.227 spdk_app_start is called in Round 3. 00:06:21.227 Shutdown signal received, stop current app iteration 00:06:21.227 ************************************ 00:06:21.227 END TEST app_repeat 00:06:21.227 ************************************ 00:06:21.227 21:56:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:21.227 21:56:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:21.227 00:06:21.227 real 0m18.216s 00:06:21.227 user 0m39.486s 00:06:21.227 sys 0m2.306s 00:06:21.227 21:56:53 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.227 21:56:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.227 21:56:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:21.227 21:56:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:21.227 21:56:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.227 21:56:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.227 21:56:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.227 ************************************ 00:06:21.227 START TEST cpu_locks 00:06:21.227 ************************************ 00:06:21.227 21:56:53 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:21.227 * Looking for test storage... 
00:06:21.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:21.227 21:56:54 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.227 21:56:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.227 21:56:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.227 21:56:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.227 21:56:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.486 21:56:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:21.486 21:56:54 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.486 21:56:54 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.486 --rc genhtml_branch_coverage=1 00:06:21.486 --rc genhtml_function_coverage=1 00:06:21.486 --rc genhtml_legend=1 00:06:21.486 --rc geninfo_all_blocks=1 00:06:21.486 --rc geninfo_unexecuted_blocks=1 00:06:21.486 00:06:21.486 ' 00:06:21.486 21:56:54 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.486 --rc genhtml_branch_coverage=1 00:06:21.486 --rc genhtml_function_coverage=1 
00:06:21.486 --rc genhtml_legend=1 00:06:21.486 --rc geninfo_all_blocks=1 00:06:21.486 --rc geninfo_unexecuted_blocks=1 00:06:21.486 00:06:21.486 ' 00:06:21.486 21:56:54 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.486 --rc genhtml_branch_coverage=1 00:06:21.486 --rc genhtml_function_coverage=1 00:06:21.486 --rc genhtml_legend=1 00:06:21.486 --rc geninfo_all_blocks=1 00:06:21.486 --rc geninfo_unexecuted_blocks=1 00:06:21.486 00:06:21.486 ' 00:06:21.486 21:56:54 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.486 --rc genhtml_branch_coverage=1 00:06:21.486 --rc genhtml_function_coverage=1 00:06:21.486 --rc genhtml_legend=1 00:06:21.486 --rc geninfo_all_blocks=1 00:06:21.486 --rc geninfo_unexecuted_blocks=1 00:06:21.486 00:06:21.486 ' 00:06:21.486 21:56:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:21.486 21:56:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:21.486 21:56:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:21.486 21:56:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:21.486 21:56:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.486 21:56:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.486 21:56:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.486 ************************************ 00:06:21.486 START TEST default_locks 00:06:21.486 ************************************ 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58875 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58875 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58875 ']' 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.486 21:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.486 [2024-12-06 21:56:54.187256] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
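A step back before the first lock test: the lcov probe in the trace just above runs the version comparison from scripts/common.sh to decide which LCOV_OPTS to export. lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both strings on '.', '-' and ':' and compares the fields numerically. A condensed sketch of that logic (the real helper also validates each field through decimal):

  # lt A B: is version A strictly older than version B?
  lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # earlier field decides
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1   # all fields equal: not strictly less
  }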
00:06:21.486 [2024-12-06 21:56:54.187376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58875 ] 00:06:21.486 [2024-12-06 21:56:54.345395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.744 [2024-12-06 21:56:54.445605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.309 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.310 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:22.310 21:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58875 00:06:22.310 21:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58875 00:06:22.310 21:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58875 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58875 ']' 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58875 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58875 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.567 killing process with pid 58875 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58875' 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58875 00:06:22.567 21:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58875 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58875 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58875 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58875 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58875 ']' 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.938 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.938 ERROR: process (pid: 58875) is no longer running 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.938 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58875) - No such process 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.938 ************************************ 00:06:23.938 END TEST default_locks 00:06:23.938 ************************************ 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.938 00:06:23.938 real 0m2.677s 00:06:23.938 user 0m2.660s 00:06:23.938 sys 0m0.447s 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.938 21:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.196 21:56:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:24.196 21:56:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.196 21:56:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.196 21:56:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.196 ************************************ 00:06:24.196 START TEST default_locks_via_rpc 00:06:24.196 ************************************ 00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58939 00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58939 00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58939 ']' 00:06:24.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
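The locks_exist check traced in the test above (event/cpu_locks.sh@22) is the central assertion of this suite: a target that claims a core must hold a lock on the matching /var/tmp/spdk_cpu_lock_* file. Reconstructed from the xtrace lines, it is roughly the following; a sketch assuming lslocks from util-linux is on the PATH:

  # locks_exist <pid>: does <pid> hold at least one spdk_cpu_lock file lock?
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }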
00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.196 21:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.196 [2024-12-06 21:56:56.928930] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:06:24.196 [2024-12-06 21:56:56.929053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58939 ] 00:06:24.454 [2024-12-06 21:56:57.087936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.454 [2024-12-06 21:56:57.189663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.018 21:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.018 21:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.018 21:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:25.018 21:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.018 21:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58939 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58939 00:06:25.276 21:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.276 21:56:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58939 00:06:25.276 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58939 ']' 
00:06:25.276 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58939 00:06:25.276 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:25.276 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.276 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58939 00:06:25.533 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.533 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.533 killing process with pid 58939 00:06:25.533 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58939' 00:06:25.533 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58939 00:06:25.533 21:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58939 00:06:26.906 00:06:26.906 real 0m2.827s 00:06:26.906 user 0m2.834s 00:06:26.906 sys 0m0.471s 00:06:26.906 21:56:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.906 21:56:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.906 ************************************ 00:06:26.906 END TEST default_locks_via_rpc 00:06:26.906 ************************************ 00:06:26.906 21:56:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:26.906 21:56:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.906 21:56:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.906 21:56:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.906 ************************************ 00:06:26.906 START TEST non_locking_app_on_locked_coremask 00:06:26.906 ************************************ 00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59002 00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59002 /var/tmp/spdk.sock 00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59002 ']' 00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
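default_locks_via_rpc, which just finished, flips the same locks at runtime instead of at startup: the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs seen in the trace drop and retake the per-core lock files on a live target. In sketch form, assuming a target already listening on the default socket:

  # Release the core lock files without restarting the target ...
  scripts/rpc.py framework_disable_cpumask_locks
  # ... then reclaim them; this fails if another process locked a core meanwhile.
  scripts/rpc.py framework_enable_cpumask_locks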
00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.906 21:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.205 [2024-12-06 21:56:59.815477] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:06:27.205 [2024-12-06 21:56:59.815601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:06:27.205 [2024-12-06 21:56:59.974047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.463 [2024-12-06 21:57:00.081245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59018 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59018 /var/tmp/spdk2.sock 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59018 ']' 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.028 21:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.028 [2024-12-06 21:57:00.739339] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:06:28.028 [2024-12-06 21:57:00.739461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59018 ] 00:06:28.287 [2024-12-06 21:57:00.914431] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
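The "CPU core locks deactivated" notice above comes from the second target being launched with --disable-cpumask-locks, which is what lets it share core 0 with the instance already holding the lock. The scenario in outline (a sketch; the backgrounding is illustrative):

  build/bin/spdk_tgt -m 0x1 &                # first instance claims and locks core 0
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &               # second shares core 0 without taking the lock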
00:06:28.287 [2024-12-06 21:57:00.914505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.287 [2024-12-06 21:57:01.120785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59002 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59002 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59002 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59002 ']' 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59002 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.663 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59002 00:06:29.921 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.921 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.921 killing process with pid 59002 00:06:29.921 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59002' 00:06:29.921 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59002 00:06:29.921 21:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59002 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59018 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59018 ']' 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59018 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59018 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59018' 00:06:33.200 killing process with pid 59018 00:06:33.200 21:57:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59018 00:06:33.200 21:57:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59018 00:06:34.657 00:06:34.657 real 0m7.398s 00:06:34.657 user 0m7.611s 00:06:34.657 sys 0m0.843s 00:06:34.657 21:57:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.657 ************************************ 00:06:34.657 END TEST non_locking_app_on_locked_coremask 00:06:34.657 21:57:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.657 ************************************ 00:06:34.657 21:57:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:34.657 21:57:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.657 21:57:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.657 21:57:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.657 ************************************ 00:06:34.657 START TEST locking_app_on_unlocked_coremask 00:06:34.657 ************************************ 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59120 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59120 /var/tmp/spdk.sock 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59120 ']' 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.657 21:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:34.657 [2024-12-06 21:57:07.268597] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:06:34.657 [2024-12-06 21:57:07.268712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59120 ] 00:06:34.657 [2024-12-06 21:57:07.431992] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
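Teardown in every one of these tests funnels through the killprocess helper whose @954-@978 xtrace lines repeat above. Its reconstructed shape, simplified to the Linux/reactor_0 branch this run actually takes:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return                           # @958: bail out if already gone
      process_name=$(ps --no-headers -o comm= "$pid")    # @960: reactor_0 here, not sudo
      echo "killing process with pid $pid"               # @972
      kill "$pid"                                        # @973: default SIGTERM
      wait "$pid"                                        # @978: reap it, surface exit status
  }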
00:06:34.657 [2024-12-06 21:57:07.432042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.914 [2024-12-06 21:57:07.530511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59136 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59136 /var/tmp/spdk2.sock 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59136 ']' 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.478 21:57:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.478 [2024-12-06 21:57:08.209733] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
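With two targets alive at once, each test hands the second instance its own RPC endpoint, as in the rpc_addr=/var/tmp/spdk2.sock line just above (the suite defines rpc_sock1 and rpc_sock2 for exactly this). Addressing one or the other from a shell looks like this sketch, where framework_get_reactors is only an illustrative method:

  scripts/rpc.py framework_get_reactors                           # first target, default socket
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_get_reactors    # second target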
00:06:35.478 [2024-12-06 21:57:08.209855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59136 ] 00:06:35.735 [2024-12-06 21:57:08.385433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.735 [2024-12-06 21:57:08.587700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.103 21:57:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.103 21:57:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.103 21:57:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59136 00:06:37.103 21:57:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59136 00:06:37.103 21:57:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59120 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59120 ']' 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59120 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59120 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.360 killing process with pid 59120 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59120' 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59120 00:06:37.360 21:57:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59120 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59136 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59136 ']' 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59136 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59136 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.886 killing process with pid 59136 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59136' 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59136 00:06:39.886 21:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59136 00:06:40.876 00:06:40.876 real 0m6.505s 00:06:40.876 user 0m6.774s 00:06:40.876 sys 0m0.827s 00:06:40.876 21:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.876 ************************************ 00:06:40.876 END TEST locking_app_on_unlocked_coremask 00:06:40.876 ************************************ 00:06:40.876 21:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.876 21:57:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:40.877 21:57:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.877 21:57:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.877 21:57:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.136 ************************************ 00:06:41.136 START TEST locking_app_on_locked_coremask 00:06:41.136 ************************************ 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59233 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59233 /var/tmp/spdk.sock 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59233 ']' 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.136 21:57:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.136 [2024-12-06 21:57:13.837536] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:06:41.136 [2024-12-06 21:57:13.837679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59233 ] 00:06:41.136 [2024-12-06 21:57:13.998218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.397 [2024-12-06 21:57:14.130938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59249 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59249 /var/tmp/spdk2.sock 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59249 /var/tmp/spdk2.sock 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59249 /var/tmp/spdk2.sock 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59249 ']' 00:06:42.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.336 21:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.336 [2024-12-06 21:57:14.951931] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:06:42.336 [2024-12-06 21:57:14.952102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59249 ] 00:06:42.336 [2024-12-06 21:57:15.130840] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59233 has claimed it. 00:06:42.336 [2024-12-06 21:57:15.130924] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.901 ERROR: process (pid: 59249) is no longer running 00:06:42.901 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59249) - No such process 00:06:42.901 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.901 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:42.901 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:42.901 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.901 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.901 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.901 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59233 00:06:42.901 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59233 00:06:42.901 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59233 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59233 ']' 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59233 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59233 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.158 killing process with pid 59233 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59233' 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59233 00:06:43.158 21:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59233 00:06:45.056 00:06:45.056 real 0m3.737s 00:06:45.056 user 0m3.945s 00:06:45.056 sys 0m0.727s 00:06:45.056 21:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.056 ************************************ 00:06:45.056 END 
TEST locking_app_on_locked_coremask 00:06:45.056 ************************************ 00:06:45.056 21:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.056 21:57:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:45.056 21:57:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.056 21:57:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.056 21:57:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.056 ************************************ 00:06:45.056 START TEST locking_overlapped_coremask 00:06:45.056 ************************************ 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59307 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59307 /var/tmp/spdk.sock 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59307 ']' 00:06:45.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.056 21:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.056 [2024-12-06 21:57:17.628818] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
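locking_app_on_locked_coremask, which just wrapped up, is the negative case: with locking left on, a second target that asks for the already-claimed core 0 dies inside claim_cpu_cores, and the suite wraps the attempt in NOT so that only a failed startup passes the test. Condensed to a sketch:

  build/bin/spdk_tgt -m 0x1 &                         # pid A claims core 0
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock    # aborts: cannot create lock on core 0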
00:06:45.056 [2024-12-06 21:57:17.628945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:06:45.056 [2024-12-06 21:57:17.788499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.056 [2024-12-06 21:57:17.894223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.056 [2024-12-06 21:57:17.894376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.056 [2024-12-06 21:57:17.894476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59325 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59325 /var/tmp/spdk2.sock 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59325 /var/tmp/spdk2.sock 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59325 /var/tmp/spdk2.sock 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59325 ']' 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.667 21:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.928 [2024-12-06 21:57:18.592635] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
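The two masks in this test decode to overlapping core sets, which is the point: 0x7 is binary 00111 (cores 0, 1, 2) and 0x1c is 11100 (cores 2, 3, 4), so exactly core 2 is contested. One line verifies the overlap:

  printf '%#x\n' $(( 0x7 & 0x1c ))    # prints 0x4: bit 2 set, i.e. core 2 is shared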
00:06:45.928 [2024-12-06 21:57:18.592772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59325 ] 00:06:45.928 [2024-12-06 21:57:18.767857] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59307 has claimed it. 00:06:45.928 [2024-12-06 21:57:18.767938] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.495 ERROR: process (pid: 59325) is no longer running 00:06:46.495 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59325) - No such process 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59307 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59307 ']' 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59307 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59307 00:06:46.495 killing process with pid 59307 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59307' 00:06:46.495 21:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59307 00:06:46.495 21:57:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59307 00:06:47.869 00:06:47.869 real 0m3.107s 00:06:47.869 user 0m8.442s 00:06:47.869 sys 0m0.447s 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.869 ************************************ 00:06:47.869 END TEST locking_overlapped_coremask 00:06:47.869 ************************************ 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.869 21:57:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:47.869 21:57:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.869 21:57:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.869 21:57:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.869 ************************************ 00:06:47.869 START TEST locking_overlapped_coremask_via_rpc 00:06:47.869 ************************************ 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59378 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59378 /var/tmp/spdk.sock 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59378 ']' 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.869 21:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.128 [2024-12-06 21:57:20.808890] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:06:48.128 [2024-12-06 21:57:20.809042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59378 ] 00:06:48.128 [2024-12-06 21:57:20.971599] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
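Before tearing down, the overlapped test above ran check_remaining_locks (the @36-@38 lines) to assert that only the surviving 0x7 instance's lock files are left after the 0x1c instance aborted. The check is a glob-against-expansion comparison; in sketch form:

  # Expect exactly the three lock files of the surviving -m 0x7 target.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]]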
00:06:48.128 [2024-12-06 21:57:20.971685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.386 [2024-12-06 21:57:21.084316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.386 [2024-12-06 21:57:21.084685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.386 [2024-12-06 21:57:21.084789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59396 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59396 /var/tmp/spdk2.sock 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59396 ']' 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.953 21:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.953 [2024-12-06 21:57:21.796725] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:06:48.953 [2024-12-06 21:57:21.796854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59396 ] 00:06:49.211 [2024-12-06 21:57:21.976700] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.211 [2024-12-06 21:57:21.976779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.469 [2024-12-06 21:57:22.192089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.469 [2024-12-06 21:57:22.192152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.469 [2024-12-06 21:57:22.192169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.848 [2024-12-06 21:57:23.421334] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59378 has claimed it. 
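The *ERROR* above is the expected outcome of this test: the first target (pid 59378) was started with -m 0x7 (cores 0-2) and has already re-enabled its core locks via the framework_enable_cpumask_locks RPC, so when the same RPC is sent to the second target (pid 59396, -m 0x1c, cores 2-4) it cannot claim the shared core 2 and must fail with the JSON-RPC error shown next. The check_remaining_locks helper traced at event/cpu_locks.sh@36-38 then confirms that exactly the first target's three lock files survive; reconstructed from the xtrace (the real script may differ in detail):

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files actually on disk
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, claimed by pid 59378
        [[ ${locks[*]} == "${locks_expected[*]}" ]]          # any extra or missing lock fails the test
    }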
00:06:50.848 request: 00:06:50.848 { 00:06:50.848 "method": "framework_enable_cpumask_locks", 00:06:50.848 "req_id": 1 00:06:50.848 } 00:06:50.848 Got JSON-RPC error response 00:06:50.848 response: 00:06:50.848 { 00:06:50.848 "code": -32603, 00:06:50.848 "message": "Failed to claim CPU core: 2" 00:06:50.848 } 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59378 /var/tmp/spdk.sock 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59378 ']' 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59396 /var/tmp/spdk2.sock 00:06:50.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59396 ']' 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.848 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.105 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.105 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.105 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.105 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.105 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.105 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.105 00:06:51.105 real 0m3.155s 00:06:51.105 user 0m1.140s 00:06:51.105 sys 0m0.138s 00:06:51.105 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.105 21:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.105 ************************************ 00:06:51.105 END TEST locking_overlapped_coremask_via_rpc 00:06:51.105 ************************************ 00:06:51.105 21:57:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.105 21:57:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59378 ]] 00:06:51.105 21:57:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59378 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59378 ']' 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59378 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59378 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.105 killing process with pid 59378 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59378' 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59378 00:06:51.105 21:57:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59378 00:06:53.002 21:57:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59396 ]] 00:06:53.002 21:57:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59396 00:06:53.002 21:57:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59396 ']' 00:06:53.002 21:57:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59396 00:06:53.002 21:57:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:53.002 21:57:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.002 
21:57:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59396 00:06:53.002 killing process with pid 59396 00:06:53.002 21:57:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:53.002 21:57:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:53.002 21:57:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59396' 00:06:53.002 21:57:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59396 00:06:53.002 21:57:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59396 00:06:53.932 21:57:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.932 21:57:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.932 21:57:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59378 ]] 00:06:53.932 21:57:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59378 00:06:53.932 21:57:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59378 ']' 00:06:53.932 21:57:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59378 00:06:53.932 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59378) - No such process 00:06:53.932 Process with pid 59378 is not found 00:06:53.932 21:57:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59378 is not found' 00:06:53.932 21:57:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59396 ]] 00:06:53.932 21:57:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59396 00:06:53.932 21:57:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59396 ']' 00:06:53.932 21:57:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59396 00:06:53.932 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59396) - No such process 00:06:53.932 Process with pid 59396 is not found 00:06:53.932 21:57:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59396 is not found' 00:06:53.932 21:57:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.932 00:06:53.932 real 0m32.756s 00:06:53.932 user 0m55.947s 00:06:53.932 sys 0m4.777s 00:06:53.932 21:57:26 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.932 21:57:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.932 ************************************ 00:06:53.932 END TEST cpu_locks 00:06:53.932 ************************************ 00:06:53.932 00:06:53.932 real 1m1.089s 00:06:53.932 user 1m51.471s 00:06:53.932 sys 0m7.928s 00:06:53.932 21:57:26 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.932 21:57:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.932 ************************************ 00:06:53.932 END TEST event 00:06:53.932 ************************************ 00:06:53.932 21:57:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:53.932 21:57:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.932 21:57:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.932 21:57:26 -- common/autotest_common.sh@10 -- # set +x 00:06:53.932 ************************************ 00:06:53.932 START TEST thread 00:06:53.932 ************************************ 00:06:53.932 21:57:26 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.189 * Looking for test storage... 
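Every suite in this log is driven by the run_test wrapper from autotest_common.sh, which prints the starred START TEST/END TEST banners and the real/user/sys timings seen above for cpu_locks and event. A minimal sketch of its shape, assuming away the bookkeeping the real helper performs (argument checks, xtrace toggling):

    run_test() {
        local test_name=$1; shift
        echo "START TEST $test_name"
        time "$@"                    # emits the real/user/sys lines seen in the log
        echo "END TEST $test_name"
    }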
00:06:54.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:54.189 21:57:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.189 21:57:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.189 21:57:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.189 21:57:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.189 21:57:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.189 21:57:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.189 21:57:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.189 21:57:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.189 21:57:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.189 21:57:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.189 21:57:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.189 21:57:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:54.189 21:57:26 thread -- scripts/common.sh@345 -- # : 1 00:06:54.189 21:57:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.189 21:57:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.189 21:57:26 thread -- scripts/common.sh@365 -- # decimal 1 00:06:54.189 21:57:26 thread -- scripts/common.sh@353 -- # local d=1 00:06:54.189 21:57:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.189 21:57:26 thread -- scripts/common.sh@355 -- # echo 1 00:06:54.189 21:57:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.189 21:57:26 thread -- scripts/common.sh@366 -- # decimal 2 00:06:54.189 21:57:26 thread -- scripts/common.sh@353 -- # local d=2 00:06:54.189 21:57:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.189 21:57:26 thread -- scripts/common.sh@355 -- # echo 2 00:06:54.189 21:57:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.189 21:57:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.189 21:57:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.189 21:57:26 thread -- scripts/common.sh@368 -- # return 0 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:54.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.189 --rc genhtml_branch_coverage=1 00:06:54.189 --rc genhtml_function_coverage=1 00:06:54.189 --rc genhtml_legend=1 00:06:54.189 --rc geninfo_all_blocks=1 00:06:54.189 --rc geninfo_unexecuted_blocks=1 00:06:54.189 00:06:54.189 ' 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:54.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.189 --rc genhtml_branch_coverage=1 00:06:54.189 --rc genhtml_function_coverage=1 00:06:54.189 --rc genhtml_legend=1 00:06:54.189 --rc geninfo_all_blocks=1 00:06:54.189 --rc geninfo_unexecuted_blocks=1 00:06:54.189 00:06:54.189 ' 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:54.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:54.189 --rc genhtml_branch_coverage=1 00:06:54.189 --rc genhtml_function_coverage=1 00:06:54.189 --rc genhtml_legend=1 00:06:54.189 --rc geninfo_all_blocks=1 00:06:54.189 --rc geninfo_unexecuted_blocks=1 00:06:54.189 00:06:54.189 ' 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:54.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.189 --rc genhtml_branch_coverage=1 00:06:54.189 --rc genhtml_function_coverage=1 00:06:54.189 --rc genhtml_legend=1 00:06:54.189 --rc geninfo_all_blocks=1 00:06:54.189 --rc geninfo_unexecuted_blocks=1 00:06:54.189 00:06:54.189 ' 00:06:54.189 21:57:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.189 21:57:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.189 ************************************ 00:06:54.189 START TEST thread_poller_perf 00:06:54.189 ************************************ 00:06:54.189 21:57:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.189 [2024-12-06 21:57:26.940705] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:06:54.189 [2024-12-06 21:57:26.941319] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59556 ] 00:06:54.445 [2024-12-06 21:57:27.098899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.445 Running 1000 pollers for 1 seconds with 1 microseconds period. 
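poller_perf registers a batch of pollers on a single reactor (-c 0x1) and measures how much CPU each poll costs. The flag meanings can be read off the banner it prints ("Running 1000 pollers for 1 seconds with 1 microseconds period"): -b is the number of pollers, -l the poller period in microseconds (0 in the second run below means untimed, busy-loop pollers), and -t the run time in seconds:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1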
00:06:54.445 [2024-12-06 21:57:27.207864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.814 [2024-12-06T21:57:28.686Z] ====================================== 00:06:55.814 [2024-12-06T21:57:28.686Z] busy:2610248742 (cyc) 00:06:55.814 [2024-12-06T21:57:28.686Z] total_run_count: 290000 00:06:55.814 [2024-12-06T21:57:28.686Z] tsc_hz: 2600000000 (cyc) 00:06:55.814 [2024-12-06T21:57:28.686Z] ====================================== 00:06:55.814 [2024-12-06T21:57:28.686Z] poller_cost: 9000 (cyc), 3461 (nsec) 00:06:55.814 00:06:55.814 real 0m1.481s 00:06:55.814 user 0m1.303s 00:06:55.814 sys 0m0.069s 00:06:55.814 21:57:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.814 21:57:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.814 ************************************ 00:06:55.814 END TEST thread_poller_perf 00:06:55.814 ************************************ 00:06:55.814 21:57:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.814 21:57:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:55.814 21:57:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.814 21:57:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.814 ************************************ 00:06:55.814 START TEST thread_poller_perf 00:06:55.814 ************************************ 00:06:55.814 21:57:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.814 [2024-12-06 21:57:28.453945] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:06:55.814 [2024-12-06 21:57:28.454057] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59593 ] 00:06:55.814 [2024-12-06 21:57:28.609567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.072 [2024-12-06 21:57:28.717160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.072 Running 1000 pollers for 1 seconds with 0 microseconds period. 
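The poller_cost line in each summary is simple division over the two figures above it: busy cycles divided by total_run_count gives cycles per poll, converted to nanoseconds at the reported tsc_hz of 2.6 GHz. For the 1 µs run just completed:

    echo $(( 2610248742 / 290000 ))               # 9000  cyc per poll
    echo $(( 9000 * 1000000000 / 2600000000 ))    # 3461  nsec per poll

The 0 µs busy-poller run below comes in far cheaper per poll (758 cyc / 291 nsec), consistent with timed pollers paying extra for timer bookkeeping on every pass.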
00:06:57.001 [2024-12-06T21:57:29.873Z] ====================================== 00:06:57.001 [2024-12-06T21:57:29.873Z] busy:2604006980 (cyc) 00:06:57.001 [2024-12-06T21:57:29.873Z] total_run_count: 3435000 00:06:57.001 [2024-12-06T21:57:29.873Z] tsc_hz: 2600000000 (cyc) 00:06:57.001 [2024-12-06T21:57:29.873Z] ====================================== 00:06:57.001 [2024-12-06T21:57:29.873Z] poller_cost: 758 (cyc), 291 (nsec) 00:06:57.260 00:06:57.260 real 0m1.447s 00:06:57.260 user 0m1.276s 00:06:57.260 sys 0m0.063s 00:06:57.260 21:57:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.260 21:57:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.260 ************************************ 00:06:57.260 END TEST thread_poller_perf 00:06:57.260 ************************************ 00:06:57.260 21:57:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:57.260 00:06:57.260 real 0m3.132s 00:06:57.260 user 0m2.691s 00:06:57.260 sys 0m0.232s 00:06:57.260 21:57:29 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.260 21:57:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.260 ************************************ 00:06:57.260 END TEST thread 00:06:57.260 ************************************ 00:06:57.260 21:57:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:57.260 21:57:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.260 21:57:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.260 21:57:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.260 21:57:29 -- common/autotest_common.sh@10 -- # set +x 00:06:57.260 ************************************ 00:06:57.260 START TEST app_cmdline 00:06:57.260 ************************************ 00:06:57.260 21:57:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.260 * Looking for test storage... 
00:06:57.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.260 21:57:29 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:57.260 21:57:29 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:57.260 21:57:29 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.260 21:57:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:57.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.260 --rc genhtml_branch_coverage=1 00:06:57.260 --rc genhtml_function_coverage=1 00:06:57.260 --rc genhtml_legend=1 00:06:57.260 --rc geninfo_all_blocks=1 00:06:57.260 --rc geninfo_unexecuted_blocks=1 00:06:57.260 00:06:57.260 ' 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:57.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.260 --rc genhtml_branch_coverage=1 00:06:57.260 --rc genhtml_function_coverage=1 00:06:57.260 --rc genhtml_legend=1 00:06:57.260 --rc geninfo_all_blocks=1 00:06:57.260 --rc geninfo_unexecuted_blocks=1 00:06:57.260 
00:06:57.260 ' 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:57.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.260 --rc genhtml_branch_coverage=1 00:06:57.260 --rc genhtml_function_coverage=1 00:06:57.260 --rc genhtml_legend=1 00:06:57.260 --rc geninfo_all_blocks=1 00:06:57.260 --rc geninfo_unexecuted_blocks=1 00:06:57.260 00:06:57.260 ' 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:57.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.260 --rc genhtml_branch_coverage=1 00:06:57.260 --rc genhtml_function_coverage=1 00:06:57.260 --rc genhtml_legend=1 00:06:57.260 --rc geninfo_all_blocks=1 00:06:57.260 --rc geninfo_unexecuted_blocks=1 00:06:57.260 00:06:57.260 ' 00:06:57.260 21:57:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:57.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.260 21:57:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59682 00:06:57.260 21:57:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59682 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59682 ']' 00:06:57.260 21:57:30 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.260 21:57:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.517 [2024-12-06 21:57:30.145193] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
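This suite starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, restricting the JSON-RPC surface to exactly those two methods. The trace that follows exercises both directions: an allowed call returns data, and anything else is rejected with -32601 Method not found:

    scripts/rpc.py spdk_get_version           # allowed: returns the version JSON below
    scripts/rpc.py env_dpdk_get_mem_stats     # not in the allow-list: -32601 Method not found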
00:06:57.517 [2024-12-06 21:57:30.145529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ] 00:06:57.517 [2024-12-06 21:57:30.310088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.775 [2024-12-06 21:57:30.412101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.339 21:57:31 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.339 21:57:31 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:58.339 21:57:31 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:58.597 { 00:06:58.597 "version": "SPDK v25.01-pre git sha1 0f59982b6", 00:06:58.597 "fields": { 00:06:58.597 "major": 25, 00:06:58.597 "minor": 1, 00:06:58.597 "patch": 0, 00:06:58.597 "suffix": "-pre", 00:06:58.597 "commit": "0f59982b6" 00:06:58.597 } 00:06:58.597 } 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:58.597 21:57:31 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.597 21:57:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:58.597 21:57:31 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:58.597 21:57:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.597 21:57:31 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:58.597 21:57:31 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.598 21:57:31 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.598 21:57:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.598 21:57:31 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.598 21:57:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.598 21:57:31 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.598 21:57:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.598 21:57:31 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.598 21:57:31 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:58.598 21:57:31 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.598 request: 00:06:58.598 { 00:06:58.598 "method": "env_dpdk_get_mem_stats", 00:06:58.598 "req_id": 1 00:06:58.598 } 00:06:58.598 Got JSON-RPC error response 00:06:58.598 response: 00:06:58.598 { 00:06:58.598 "code": -32601, 00:06:58.598 "message": "Method not found" 00:06:58.598 } 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.855 21:57:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59682 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59682 ']' 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59682 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59682 00:06:58.855 killing process with pid 59682 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59682' 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@973 -- # kill 59682 00:06:58.855 21:57:31 app_cmdline -- common/autotest_common.sh@978 -- # wait 59682 00:07:00.226 00:07:00.226 real 0m3.092s 00:07:00.226 user 0m3.424s 00:07:00.226 sys 0m0.418s 00:07:00.226 21:57:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.226 21:57:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.226 ************************************ 00:07:00.226 END TEST app_cmdline 00:07:00.226 ************************************ 00:07:00.226 21:57:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.226 21:57:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.226 21:57:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.226 21:57:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.226 ************************************ 00:07:00.226 START TEST version 00:07:00.226 ************************************ 00:07:00.226 21:57:33 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.485 * Looking for test storage... 
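The version suite starting here cross-checks the C headers against the Python package: it greps the SPDK_VERSION_* defines out of include/spdk/version.h, assembles "25.1", appends "rc0" for the -pre suffix, and compares the result against python3 -c 'import spdk; print(spdk.__version__)' (25.1rc0). A condensed sketch of the header parsing traced below at app/version.sh@13-14 (the ${1^^} shorthand is mine; the real script spells out each field separately):

    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    major=$(get_header_version major)     # 25
    minor=$(get_header_version minor)     # 1
    suffix=$(get_header_version suffix)   # -pre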
00:07:00.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.485 21:57:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.485 21:57:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.485 21:57:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.485 21:57:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.485 21:57:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.485 21:57:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.485 21:57:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.485 21:57:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.485 21:57:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.485 21:57:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.485 21:57:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.485 21:57:33 version -- scripts/common.sh@344 -- # case "$op" in 00:07:00.485 21:57:33 version -- scripts/common.sh@345 -- # : 1 00:07:00.485 21:57:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.485 21:57:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.485 21:57:33 version -- scripts/common.sh@365 -- # decimal 1 00:07:00.485 21:57:33 version -- scripts/common.sh@353 -- # local d=1 00:07:00.485 21:57:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.485 21:57:33 version -- scripts/common.sh@355 -- # echo 1 00:07:00.485 21:57:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.485 21:57:33 version -- scripts/common.sh@366 -- # decimal 2 00:07:00.485 21:57:33 version -- scripts/common.sh@353 -- # local d=2 00:07:00.485 21:57:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.485 21:57:33 version -- scripts/common.sh@355 -- # echo 2 00:07:00.485 21:57:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.485 21:57:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.485 21:57:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.485 21:57:33 version -- scripts/common.sh@368 -- # return 0 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.485 --rc genhtml_branch_coverage=1 00:07:00.485 --rc genhtml_function_coverage=1 00:07:00.485 --rc genhtml_legend=1 00:07:00.485 --rc geninfo_all_blocks=1 00:07:00.485 --rc geninfo_unexecuted_blocks=1 00:07:00.485 00:07:00.485 ' 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.485 --rc genhtml_branch_coverage=1 00:07:00.485 --rc genhtml_function_coverage=1 00:07:00.485 --rc genhtml_legend=1 00:07:00.485 --rc geninfo_all_blocks=1 00:07:00.485 --rc geninfo_unexecuted_blocks=1 00:07:00.485 00:07:00.485 ' 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.485 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:00.485 --rc genhtml_branch_coverage=1 00:07:00.485 --rc genhtml_function_coverage=1 00:07:00.485 --rc genhtml_legend=1 00:07:00.485 --rc geninfo_all_blocks=1 00:07:00.485 --rc geninfo_unexecuted_blocks=1 00:07:00.485 00:07:00.485 ' 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.485 --rc genhtml_branch_coverage=1 00:07:00.485 --rc genhtml_function_coverage=1 00:07:00.485 --rc genhtml_legend=1 00:07:00.485 --rc geninfo_all_blocks=1 00:07:00.485 --rc geninfo_unexecuted_blocks=1 00:07:00.485 00:07:00.485 ' 00:07:00.485 21:57:33 version -- app/version.sh@17 -- # get_header_version major 00:07:00.485 21:57:33 version -- app/version.sh@14 -- # cut -f2 00:07:00.485 21:57:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.485 21:57:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.485 21:57:33 version -- app/version.sh@17 -- # major=25 00:07:00.485 21:57:33 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.485 21:57:33 version -- app/version.sh@14 -- # cut -f2 00:07:00.485 21:57:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.485 21:57:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.485 21:57:33 version -- app/version.sh@18 -- # minor=1 00:07:00.485 21:57:33 version -- app/version.sh@19 -- # get_header_version patch 00:07:00.485 21:57:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.485 21:57:33 version -- app/version.sh@14 -- # cut -f2 00:07:00.485 21:57:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.485 21:57:33 version -- app/version.sh@19 -- # patch=0 00:07:00.485 21:57:33 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.485 21:57:33 version -- app/version.sh@14 -- # cut -f2 00:07:00.485 21:57:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.485 21:57:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.485 21:57:33 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.485 21:57:33 version -- app/version.sh@22 -- # version=25.1 00:07:00.485 21:57:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.485 21:57:33 version -- app/version.sh@28 -- # version=25.1rc0 00:07:00.485 21:57:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:00.485 21:57:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:00.485 21:57:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:00.485 21:57:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:00.485 ************************************ 00:07:00.485 END TEST version 00:07:00.485 ************************************ 00:07:00.485 00:07:00.485 real 0m0.201s 00:07:00.485 user 0m0.136s 00:07:00.485 sys 0m0.094s 00:07:00.485 21:57:33 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.485 21:57:33 version -- common/autotest_common.sh@10 -- # set +x 00:07:00.485 21:57:33 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:00.485 21:57:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:00.485 21:57:33 -- spdk/autotest.sh@194 -- # uname -s 00:07:00.485 21:57:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:00.485 21:57:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:00.485 21:57:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:00.485 21:57:33 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:00.485 21:57:33 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:00.485 21:57:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.485 21:57:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.485 21:57:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.485 ************************************ 00:07:00.485 START TEST blockdev_nvme 00:07:00.485 ************************************ 00:07:00.485 21:57:33 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:00.743 * Looking for test storage... 00:07:00.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:00.743 21:57:33 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.743 21:57:33 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.743 21:57:33 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.744 21:57:33 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.744 --rc genhtml_branch_coverage=1 00:07:00.744 --rc genhtml_function_coverage=1 00:07:00.744 --rc genhtml_legend=1 00:07:00.744 --rc geninfo_all_blocks=1 00:07:00.744 --rc geninfo_unexecuted_blocks=1 00:07:00.744 00:07:00.744 ' 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.744 --rc genhtml_branch_coverage=1 00:07:00.744 --rc genhtml_function_coverage=1 00:07:00.744 --rc genhtml_legend=1 00:07:00.744 --rc geninfo_all_blocks=1 00:07:00.744 --rc geninfo_unexecuted_blocks=1 00:07:00.744 00:07:00.744 ' 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.744 --rc genhtml_branch_coverage=1 00:07:00.744 --rc genhtml_function_coverage=1 00:07:00.744 --rc genhtml_legend=1 00:07:00.744 --rc geninfo_all_blocks=1 00:07:00.744 --rc geninfo_unexecuted_blocks=1 00:07:00.744 00:07:00.744 ' 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.744 --rc genhtml_branch_coverage=1 00:07:00.744 --rc genhtml_function_coverage=1 00:07:00.744 --rc genhtml_legend=1 00:07:00.744 --rc geninfo_all_blocks=1 00:07:00.744 --rc geninfo_unexecuted_blocks=1 00:07:00.744 00:07:00.744 ' 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.744 21:57:33 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:00.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59854 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59854 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59854 ']' 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.744 21:57:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.744 21:57:33 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:00.744 [2024-12-06 21:57:33.581836] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
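start_spdk_tgt launches the target and then blocks in waitforlisten until the RPC socket answers. From the locals visible in the trace above (rpc_addr=/var/tmp/spdk.sock, max_retries=100), its core loop looks roughly like the following; this is a sketch only, and the real helper in autotest_common.sh handles more edge cases:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1            # target exited early
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods \
                &>/dev/null && return 0                       # socket is up
            sleep 0.5
        done
        return 1
    }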
00:07:00.744 [2024-12-06 21:57:33.581957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ] 00:07:01.002 [2024-12-06 21:57:33.743092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.002 [2024-12-06 21:57:33.843307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.567 21:57:34 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.567 21:57:34 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:01.567 21:57:34 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:01.567 21:57:34 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:01.567 21:57:34 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:01.567 21:57:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:01.567 21:57:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.825 21:57:34 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:01.825 21:57:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.825 21:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.084 21:57:34 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.084 21:57:34 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:02.084 21:57:34 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.084 21:57:34 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.084 21:57:34 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.084 21:57:34 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:02.084 21:57:34 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.084 21:57:34 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:02.084 21:57:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.084 21:57:34 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:02.084 21:57:34 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:02.085 21:57:34 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "f3a26ccb-b87d-4c3d-87e5-ff91a8fcd820"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f3a26ccb-b87d-4c3d-87e5-ff91a8fcd820",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "aaa1fc5f-81e4-43b2-85bc-771da1466a02"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "aaa1fc5f-81e4-43b2-85bc-771da1466a02",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "232328aa-7df5-4584-ad1f-966310ac2a73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "232328aa-7df5-4584-ad1f-966310ac2a73",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4928bb93-32a9-4cf9-958c-8721e6edcc73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4928bb93-32a9-4cf9-958c-8721e6edcc73",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "267f4adc-44bb-475a-a912-c16bd1f58ce4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "267f4adc-44bb-475a-a912-c16bd1f58ce4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "896a9ffe-180f-4bf5-b806-9b297b49f8fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "896a9ffe-180f-4bf5-b806-9b297b49f8fb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:02.085 21:57:34 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:02.085 21:57:34 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:02.085 21:57:34 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:02.085 21:57:34 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59854 00:07:02.085 21:57:34 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59854 ']' 00:07:02.085 21:57:34 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59854 00:07:02.085 21:57:34 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:02.085 21:57:34 
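[Editor's note] The JSON dump above is the bdev_get_bdevs output that the harness filters with jq to collect unclaimed bdev names into bdev_list (Nvme0n1, Nvme1n1, Nvme2n1, Nvme2n2, Nvme2n3, Nvme3n1) before picking Nvme0n1 as hello_world_bdev. A one-liner sketch of that enumeration (the jq filter is copied from the trace; running it standalone against the default RPC socket is an assumption):

    # list every bdev that no module has claimed, one name per line
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'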
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.085 21:57:34 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59854 00:07:02.085 killing process with pid 59854 00:07:02.085 21:57:34 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.085 21:57:34 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.085 21:57:34 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59854' 00:07:02.085 21:57:34 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59854 00:07:02.085 21:57:34 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59854 00:07:03.982 21:57:36 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:03.982 21:57:36 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:03.982 21:57:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:03.982 21:57:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.982 21:57:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.982 ************************************ 00:07:03.982 START TEST bdev_hello_world 00:07:03.982 ************************************ 00:07:03.982 21:57:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:03.982 [2024-12-06 21:57:36.518535] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:07:03.982 [2024-12-06 21:57:36.518655] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59938 ] 00:07:03.982 [2024-12-06 21:57:36.676970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.982 [2024-12-06 21:57:36.773478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.634 [2024-12-06 21:57:37.315593] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:04.634 [2024-12-06 21:57:37.315640] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:04.634 [2024-12-06 21:57:37.315659] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:04.634 [2024-12-06 21:57:37.318113] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:04.634 [2024-12-06 21:57:37.318750] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:04.634 [2024-12-06 21:57:37.318777] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:04.634 [2024-12-06 21:57:37.319107] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
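[Editor's note] The killprocess call traced above is the harness's standard teardown: confirm pid 59854 is still alive with kill -0, check the process name is an SPDK reactor rather than sudo, SIGTERM it, then wait to reap it. A stripped-down sketch of the idiom (variable names here are illustrative, not the autotest_common.sh originals):

    app_pid=59854                        # pid printed by start_spdk_tgt
    if kill -0 "$app_pid" 2>/dev/null; then
        kill "$app_pid"                  # SIGTERM lets the reactors shut down cleanly
        wait "$app_pid" || true          # reap our child; a non-zero status is expected here
    fi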
00:07:04.634 00:07:04.634 [2024-12-06 21:57:37.319129] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:05.258 00:07:05.258 real 0m1.662s 00:07:05.258 user 0m1.366s 00:07:05.258 sys 0m0.190s 00:07:05.258 21:57:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.258 21:57:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:05.258 ************************************ 00:07:05.258 END TEST bdev_hello_world 00:07:05.258 ************************************ 00:07:05.517 21:57:38 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:05.517 21:57:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.517 21:57:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.517 21:57:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.517 ************************************ 00:07:05.517 START TEST bdev_bounds 00:07:05.517 ************************************ 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59974 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.517 Process bdevio pid: 59974 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59974' 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59974 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59974 ']' 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.517 21:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:05.517 [2024-12-06 21:57:38.253800] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
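[Editor's note] The bdev_bounds test launched above starts the bdevio app with -w so it waits for an RPC trigger, then drives it with tests.py. A bare-bones sketch of that two-step pattern (flags beyond -w and --json are omitted here; treat this as illustrative, not the exact harness invocation):

    # start bdevio paused, then kick off the suites over RPC
    ./test/bdev/bdevio/bdevio -w --json test/bdev/bdev.json &
    ./test/bdev/bdevio/tests.py perform_tests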
00:07:05.517 [2024-12-06 21:57:38.253917] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 00:07:05.775 [2024-12-06 21:57:38.415454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.775 [2024-12-06 21:57:38.517357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.775 [2024-12-06 21:57:38.517658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.775 [2024-12-06 21:57:38.517661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.341 21:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.341 21:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:06.341 21:57:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:06.341 I/O targets: 00:07:06.341 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:06.341 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:06.341 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:06.341 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:06.341 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:06.341 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:06.341 00:07:06.341 00:07:06.341 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.341 http://cunit.sourceforge.net/ 00:07:06.341 00:07:06.341 00:07:06.341 Suite: bdevio tests on: Nvme3n1 00:07:06.341 Test: blockdev write read block ...passed 00:07:06.599 Test: blockdev write zeroes read block ...passed 00:07:06.599 Test: blockdev write zeroes read no split ...passed 00:07:06.599 Test: blockdev write zeroes read split ...passed 00:07:06.599 Test: blockdev write zeroes read split partial ...passed 00:07:06.599 Test: blockdev reset ...[2024-12-06 21:57:39.268597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:06.599 [2024-12-06 21:57:39.272822] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:06.599 passed 00:07:06.599 Test: blockdev write read 8 blocks ...passed 00:07:06.599 Test: blockdev write read size > 128k ...passed 00:07:06.599 Test: blockdev write read invalid size ...passed 00:07:06.599 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.599 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.599 Test: blockdev write read max offset ...passed 00:07:06.599 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.599 Test: blockdev writev readv 8 blocks ...passed 00:07:06.599 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.599 Test: blockdev writev readv block ...passed 00:07:06.599 Test: blockdev writev readv size > 128k ...passed 00:07:06.599 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.599 Test: blockdev comparev and writev ...[2024-12-06 21:57:39.279827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b700a000 len:0x1000 00:07:06.599 [2024-12-06 21:57:39.279881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.599 passed 00:07:06.599 Test: blockdev nvme passthru rw ...passed 00:07:06.599 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.599 Test: blockdev nvme admin passthru ...[2024-12-06 21:57:39.280375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.599 [2024-12-06 21:57:39.280406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.599 passed 00:07:06.599 Test: blockdev copy ...passed 00:07:06.599 Suite: bdevio tests on: Nvme2n3 00:07:06.599 Test: blockdev write read block ...passed 00:07:06.599 Test: blockdev write zeroes read block ...passed 00:07:06.599 Test: blockdev write zeroes read no split ...passed 00:07:06.599 Test: blockdev write zeroes read split ...passed 00:07:06.599 Test: blockdev write zeroes read split partial ...passed 00:07:06.599 Test: blockdev reset ...[2024-12-06 21:57:39.400352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:06.599 [2024-12-06 21:57:39.403472] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:06.599 passed 00:07:06.599 Test: blockdev write read 8 blocks ...passed 00:07:06.599 Test: blockdev write read size > 128k ...passed 00:07:06.599 Test: blockdev write read invalid size ...passed 00:07:06.599 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.599 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.599 Test: blockdev write read max offset ...passed 00:07:06.599 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.599 Test: blockdev writev readv 8 blocks ...passed 00:07:06.599 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.599 Test: blockdev writev readv block ...passed 00:07:06.599 Test: blockdev writev readv size > 128k ...passed 00:07:06.599 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.599 Test: blockdev comparev and writev ...[2024-12-06 21:57:39.410786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb406000 len:0x1000 00:07:06.599 [2024-12-06 21:57:39.410827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.599 passed 00:07:06.599 Test: blockdev nvme passthru rw ...passed 00:07:06.599 Test: blockdev nvme passthru vendor specific ...[2024-12-06 21:57:39.411325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.599 [2024-12-06 21:57:39.411350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.599 passed 00:07:06.599 Test: blockdev nvme admin passthru ...passed 00:07:06.600 Test: blockdev copy ...passed 00:07:06.600 Suite: bdevio tests on: Nvme2n2 00:07:06.600 Test: blockdev write read block ...passed 00:07:06.857 Test: blockdev write zeroes read block ...passed 00:07:06.857 Test: blockdev write zeroes read no split ...passed 00:07:06.857 Test: blockdev write zeroes read split ...passed 00:07:06.857 Test: blockdev write zeroes read split partial ...passed 00:07:06.857 Test: blockdev reset ...[2024-12-06 21:57:39.518693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:06.857 [2024-12-06 21:57:39.521815] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:06.857 passed 00:07:06.857 Test: blockdev write read 8 blocks ...passed 00:07:06.857 Test: blockdev write read size > 128k ...passed 00:07:06.857 Test: blockdev write read invalid size ...passed 00:07:06.857 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.857 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.857 Test: blockdev write read max offset ...passed 00:07:06.857 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.857 Test: blockdev writev readv 8 blocks ...passed 00:07:06.857 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.858 Test: blockdev writev readv block ...passed 00:07:06.858 Test: blockdev writev readv size > 128k ...passed 00:07:06.858 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.858 Test: blockdev comparev and writev ...[2024-12-06 21:57:39.529010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d723c000 len:0x1000 00:07:06.858 [2024-12-06 21:57:39.529151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.858 passed 00:07:06.858 Test: blockdev nvme passthru rw ...passed 00:07:06.858 Test: blockdev nvme passthru vendor specific ...[2024-12-06 21:57:39.529890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.858 [2024-12-06 21:57:39.529987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.858 passed 00:07:06.858 Test: blockdev nvme admin passthru ...passed 00:07:06.858 Test: blockdev copy ...passed 00:07:06.858 Suite: bdevio tests on: Nvme2n1 00:07:06.858 Test: blockdev write read block ...passed 00:07:06.858 Test: blockdev write zeroes read block ...passed 00:07:06.858 Test: blockdev write zeroes read no split ...passed 00:07:06.858 Test: blockdev write zeroes read split ...passed 00:07:06.858 Test: blockdev write zeroes read split partial ...passed 00:07:06.858 Test: blockdev reset ...[2024-12-06 21:57:39.648790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:06.858 [2024-12-06 21:57:39.652034] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:06.858 passed 00:07:06.858 Test: blockdev write read 8 blocks ...passed 00:07:06.858 Test: blockdev write read size > 128k ...passed 00:07:06.858 Test: blockdev write read invalid size ...passed 00:07:06.858 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.858 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.858 Test: blockdev write read max offset ...passed 00:07:06.858 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.858 Test: blockdev writev readv 8 blocks ...passed 00:07:06.858 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.858 Test: blockdev writev readv block ...passed 00:07:06.858 Test: blockdev writev readv size > 128k ...passed 00:07:06.858 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.858 Test: blockdev comparev and writev ...[2024-12-06 21:57:39.659384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7238000 len:0x1000 00:07:06.858 [2024-12-06 21:57:39.659426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.858 passed 00:07:06.858 Test: blockdev nvme passthru rw ...passed 00:07:06.858 Test: blockdev nvme passthru vendor specific ...[2024-12-06 21:57:39.660375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.858 [2024-12-06 21:57:39.660402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.858 passed 00:07:06.858 Test: blockdev nvme admin passthru ...passed 00:07:06.858 Test: blockdev copy ...passed 00:07:06.858 Suite: bdevio tests on: Nvme1n1 00:07:06.858 Test: blockdev write read block ...passed 00:07:07.115 Test: blockdev write zeroes read block ...passed 00:07:07.115 Test: blockdev write zeroes read no split ...passed 00:07:07.115 Test: blockdev write zeroes read split ...passed 00:07:07.115 Test: blockdev write zeroes read split partial ...passed 00:07:07.115 Test: blockdev reset ...[2024-12-06 21:57:39.773786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:07.115 [2024-12-06 21:57:39.777656] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:07.115 passed 00:07:07.115 Test: blockdev write read 8 blocks ...passed 00:07:07.115 Test: blockdev write read size > 128k ...passed 00:07:07.115 Test: blockdev write read invalid size ...passed 00:07:07.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.115 Test: blockdev write read max offset ...passed 00:07:07.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.115 Test: blockdev writev readv 8 blocks ...passed 00:07:07.115 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.115 Test: blockdev writev readv block ...passed 00:07:07.115 Test: blockdev writev readv size > 128k ...passed 00:07:07.115 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.115 Test: blockdev comparev and writev ...[2024-12-06 21:57:39.786741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7234000 len:0x1000 00:07:07.115 [2024-12-06 21:57:39.786857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.115 passed 00:07:07.115 Test: blockdev nvme passthru rw ...passed 00:07:07.115 Test: blockdev nvme passthru vendor specific ...[2024-12-06 21:57:39.787280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.115 [2024-12-06 21:57:39.787305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.115 passed 00:07:07.115 Test: blockdev nvme admin passthru ...passed 00:07:07.116 Test: blockdev copy ...passed 00:07:07.116 Suite: bdevio tests on: Nvme0n1 00:07:07.116 Test: blockdev write read block ...passed 00:07:07.116 Test: blockdev write zeroes read block ...passed 00:07:07.116 Test: blockdev write zeroes read no split ...passed 00:07:07.116 Test: blockdev write zeroes read split ...passed 00:07:07.116 Test: blockdev write zeroes read split partial ...passed 00:07:07.116 Test: blockdev reset ...[2024-12-06 21:57:39.974524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:07.115 [2024-12-06 21:57:39.977207] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:07.115 passed 00:07:07.116 Test: blockdev write read 8 blocks ...passed 00:07:07.116 Test: blockdev write read size > 128k ...passed 00:07:07.116 Test: blockdev write read invalid size ...passed 00:07:07.116 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.116 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.116 Test: blockdev write read max offset ...passed 00:07:07.116 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.116 Test: blockdev writev readv 8 blocks ...passed 00:07:07.116 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.116 Test: blockdev writev readv block ...passed 00:07:07.116 Test: blockdev writev readv size > 128k ...passed 00:07:07.373 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.373 Test: blockdev comparev and writev ...[2024-12-06 21:57:39.986355] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:07.373 separate metadata which is not supported yet. passed
00:07:07.373 00:07:07.373 Test: blockdev nvme passthru rw ...passed 00:07:07.373 Test: blockdev nvme passthru vendor specific ...[2024-12-06 21:57:39.986997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:07.373 [2024-12-06 21:57:39.987131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:07.373 passed 00:07:07.373 Test: blockdev nvme admin passthru ...passed 00:07:07.373 Test: blockdev copy ...passed 00:07:07.373 00:07:07.373 Run Summary: Type Total Ran Passed Failed Inactive 00:07:07.373 suites 6 6 n/a 0 0 00:07:07.373 tests 138 138 138 0 0 00:07:07.373 asserts 893 893 893 0 n/a 00:07:07.373 00:07:07.373 Elapsed time = 1.892 seconds 00:07:07.373 0 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59974 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59974 ']' 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59974 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59974 00:07:07.373 killing process with pid 59974 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59974' 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59974 00:07:07.373 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59974 00:07:07.939 21:57:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:07.939 00:07:07.939 real 0m2.519s 00:07:07.939 user 0m6.196s 00:07:07.939 sys 0m0.281s 00:07:07.939 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.939 21:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:07.939 ************************************ 00:07:07.939 END TEST bdev_bounds 00:07:07.939 ************************************ 00:07:07.939 21:57:40 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.939 21:57:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:07.939 21:57:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.939 21:57:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.939 ************************************ 00:07:07.939 START TEST bdev_nbd 00:07:07.939 ************************************ 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- 
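[Editor's note] The bdev_nbd test starting here launches bdev_svc with its RPC server on /var/tmp/spdk-nbd.sock, exports each bdev as a kernel /dev/nbdX block device, reads a block through it, and tears everything down. A compact sketch of one start/verify/stop cycle (device and bdev names match this run; treat the flow as illustrative, not the full nbd_function_test):

    sock=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # one-block sanity read
    ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0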
bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:07.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60034 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60034 /var/tmp/spdk-nbd.sock 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60034 ']' 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.939 21:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.940 21:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.940 21:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.940 21:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:08.198 [2024-12-06 21:57:40.828992] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:07:08.198 [2024-12-06 21:57:40.829107] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.198 [2024-12-06 21:57:40.990991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.457 [2024-12-06 21:57:41.091936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.022 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.280 1+0 records in 
00:07:09.280 1+0 records out 00:07:09.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127661 s, 3.2 MB/s 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.280 21:57:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:09.280 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:09.280 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:09.537 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.538 1+0 records in 00:07:09.538 1+0 records out 00:07:09.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722234 s, 5.7 MB/s 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.538 1+0 records in 00:07:09.538 1+0 records out 00:07:09.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000843553 s, 4.9 MB/s 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.538 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.796 1+0 records in 00:07:09.796 1+0 records out 00:07:09.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886156 s, 4.6 MB/s 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.796 21:57:42 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.796 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:10.054 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:10.054 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:10.054 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:10.054 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:10.054 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.055 1+0 records in 00:07:10.055 1+0 records out 00:07:10.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569626 s, 7.2 MB/s 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.055 21:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.313 1+0 records in 00:07:10.313 1+0 records out 00:07:10.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467386 s, 8.8 MB/s 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.313 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.571 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:10.571 { 00:07:10.572 "nbd_device": "/dev/nbd0", 00:07:10.572 "bdev_name": "Nvme0n1" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd1", 00:07:10.572 "bdev_name": "Nvme1n1" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd2", 00:07:10.572 "bdev_name": "Nvme2n1" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd3", 00:07:10.572 "bdev_name": "Nvme2n2" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd4", 00:07:10.572 "bdev_name": "Nvme2n3" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd5", 00:07:10.572 "bdev_name": "Nvme3n1" 00:07:10.572 } 00:07:10.572 ]' 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd0", 00:07:10.572 "bdev_name": "Nvme0n1" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd1", 00:07:10.572 "bdev_name": "Nvme1n1" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd2", 00:07:10.572 "bdev_name": "Nvme2n1" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd3", 00:07:10.572 "bdev_name": "Nvme2n2" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd4", 00:07:10.572 "bdev_name": "Nvme2n3" 00:07:10.572 }, 00:07:10.572 { 00:07:10.572 "nbd_device": "/dev/nbd5", 00:07:10.572 "bdev_name": "Nvme3n1" 00:07:10.572 } 00:07:10.572 ]' 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- 
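[Editor's note] nbd_get_disks above returns the live nbd-to-bdev mapping as JSON, which the harness reduces to device names with jq. A short sketch that flattens the same output for eyeballing (this exact jq expression is an assumption, not taken from nbd_common.sh):

    # print one "device -> bdev" pair per line, e.g. /dev/nbd0 -> Nvme0n1
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'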
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.572 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.829 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.087 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.394 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:11.394 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.394 21:57:43 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:11.394 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.394 21:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.394 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.672 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.930 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.188 21:57:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.188 21:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:12.188 /dev/nbd0 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.444 
21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.444 1+0 records in 00:07:12.444 1+0 records out 00:07:12.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00099571 s, 4.1 MB/s 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.444 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:12.445 /dev/nbd1 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.445 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.701 1+0 records in 00:07:12.701 1+0 records out 00:07:12.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106907 s, 3.8 MB/s 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:12.701 /dev/nbd10 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.701 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.702 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.702 1+0 records in 00:07:12.702 1+0 records out 00:07:12.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634721 s, 6.5 MB/s 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:12.958 /dev/nbd11 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.958 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.959 1+0 records in 00:07:12.959 1+0 records out 00:07:12.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402796 s, 10.2 MB/s 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.959 21:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:13.215 /dev/nbd12 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.215 1+0 records in 00:07:13.215 1+0 records out 00:07:13.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0018114 s, 2.3 MB/s 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.215 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:13.472 /dev/nbd13 00:07:13.472 21:57:46 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.472 1+0 records in 00:07:13.472 1+0 records out 00:07:13.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116868 s, 3.5 MB/s 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.472 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd0", 00:07:13.729 "bdev_name": "Nvme0n1" 00:07:13.729 }, 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd1", 00:07:13.729 "bdev_name": "Nvme1n1" 00:07:13.729 }, 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd10", 00:07:13.729 "bdev_name": "Nvme2n1" 00:07:13.729 }, 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd11", 00:07:13.729 "bdev_name": "Nvme2n2" 00:07:13.729 }, 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd12", 00:07:13.729 "bdev_name": "Nvme2n3" 00:07:13.729 }, 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd13", 00:07:13.729 "bdev_name": "Nvme3n1" 00:07:13.729 } 00:07:13.729 ]' 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd0", 00:07:13.729 "bdev_name": "Nvme0n1" 00:07:13.729 }, 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd1", 00:07:13.729 "bdev_name": "Nvme1n1" 00:07:13.729 }, 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd10", 00:07:13.729 "bdev_name": "Nvme2n1" 00:07:13.729 }, 00:07:13.729 
{ 00:07:13.729 "nbd_device": "/dev/nbd11", 00:07:13.729 "bdev_name": "Nvme2n2" 00:07:13.729 }, 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd12", 00:07:13.729 "bdev_name": "Nvme2n3" 00:07:13.729 }, 00:07:13.729 { 00:07:13.729 "nbd_device": "/dev/nbd13", 00:07:13.729 "bdev_name": "Nvme3n1" 00:07:13.729 } 00:07:13.729 ]' 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.729 /dev/nbd1 00:07:13.729 /dev/nbd10 00:07:13.729 /dev/nbd11 00:07:13.729 /dev/nbd12 00:07:13.729 /dev/nbd13' 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.729 /dev/nbd1 00:07:13.729 /dev/nbd10 00:07:13.729 /dev/nbd11 00:07:13.729 /dev/nbd12 00:07:13.729 /dev/nbd13' 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:13.729 256+0 records in 00:07:13.729 256+0 records out 00:07:13.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100516 s, 104 MB/s 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.729 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.987 256+0 records in 00:07:13.987 256+0 records out 00:07:13.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0626777 s, 16.7 MB/s 00:07:13.987 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.987 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.987 256+0 records in 00:07:13.987 256+0 records out 00:07:13.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0665766 s, 15.7 MB/s 00:07:13.987 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.987 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:13.987 256+0 records in 00:07:13.987 256+0 records out 00:07:13.987 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0650391 s, 16.1 MB/s 00:07:13.987 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.987 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:13.987 256+0 records in 00:07:13.987 256+0 records out 00:07:13.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0655337 s, 16.0 MB/s 00:07:13.987 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.987 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:14.245 256+0 records in 00:07:14.245 256+0 records out 00:07:14.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0639029 s, 16.4 MB/s 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:14.245 256+0 records in 00:07:14.245 256+0 records out 00:07:14.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0643826 s, 16.3 MB/s 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.245 21:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.245 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.502 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.759 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.015 21:57:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.015 21:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.272 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.529 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:15.785 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:16.042 malloc_lvol_verify 00:07:16.042 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:16.299 13016be6-ee0d-484b-a1e9-23b31d5b69fd 00:07:16.299 21:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:16.299 73a4454c-335f-49e2-9bbe-7c68da9f26e1 00:07:16.299 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:16.557 /dev/nbd0 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:16.557 mke2fs 1.47.0 (5-Feb-2023) 00:07:16.557 Discarding device blocks: 0/4096 done 00:07:16.557 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:16.557 00:07:16.557 Allocating group tables: 0/1 done 00:07:16.557 Writing inode tables: 0/1 done 00:07:16.557 Creating journal (1024 blocks): done 00:07:16.557 Writing superblocks and filesystem accounting information: 0/1 done 00:07:16.557 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:16.557 21:57:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.557 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60034 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60034 ']' 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60034 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60034 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.814 killing process with pid 60034 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60034' 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60034 00:07:16.814 21:57:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60034 00:07:17.741 21:57:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:17.741 00:07:17.741 real 0m9.671s 00:07:17.741 user 0m13.844s 00:07:17.741 sys 0m3.044s 00:07:17.741 21:57:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.741 ************************************ 00:07:17.741 END TEST bdev_nbd 00:07:17.741 ************************************ 00:07:17.741 21:57:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:17.741 21:57:50 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:17.741 21:57:50 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:17.741 skipping fio tests on NVMe due to multi-ns failures. 00:07:17.741 21:57:50 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
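The bdev_nbd test that ends here drives every device through one cycle: attach a bdev to an NBD node over the RPC socket, poll /proc/partitions until the kernel exposes the node, read a single block with O_DIRECT as a liveness probe, push 1 MiB of random data through the device, and cmp it back before detaching. Below is a minimal sketch of that cycle, assuming a running SPDK target serving /var/tmp/spdk-nbd.sock. The wait_for_nbd helper name, the /tmp scratch paths, and the 0.1 s poll interval are illustrative assumptions; the rpc.py subcommands, the 20-attempt limit, and the dd/cmp invocations are taken from the trace above.

#!/usr/bin/env bash
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

wait_for_nbd() {
    # Poll until the kernel registers the device in /proc/partitions.
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && return 0
        sleep 0.1    # assumed interval; the trace only fixes the 20-try limit
    done
    return 1
}

# Attach a bdev to an NBD node and probe it with one direct read.
"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
wait_for_nbd nbd0
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

# Push 1 MiB of random data through the device and compare it back.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

# List the active attachments, then detach.
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

Polling /proc/partitions rather than sleeping a fixed time keeps the check fast on hosts where the kernel registers the node immediately, which is why each wait in the trace breaks out on its first grep.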
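The nbd_with_lvol_verify step goes one layer further: it stacks a logical volume on a malloc bdev, exports lvs/lvol over NBD, and proves the attachment carries a working filesystem by formatting it with mkfs.ext4. The following is condensed from the RPC calls in the trace, reusing the rpc and sock variables from the sketch above; the 16 MB malloc bdev with 512-byte blocks and the 4 MB lvol are the test's own sizes.

# malloc bdev -> lvstore -> 4 MB lvol addressed as lvs/lvol.
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs

# Export the lvol over NBD and put ext4 on it. The test additionally
# waits until /sys/block/nbd0/size is non-zero before formatting, so
# mkfs cannot race the kernel's capacity update.
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

The 4096 1k blocks reported by mke2fs in the trace match the 4 MB lvol, and the 8192-sector check against /sys/block/nbd0/size is the same capacity seen from the kernel's side.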
00:07:17.741 21:57:50 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:17.741 21:57:50 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:17.741 21:57:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:17.741 21:57:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.741 21:57:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.741 ************************************ 00:07:17.741 START TEST bdev_verify 00:07:17.741 ************************************ 00:07:17.741 21:57:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:17.741 [2024-12-06 21:57:50.557914] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:07:17.741 [2024-12-06 21:57:50.558032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60412 ] 00:07:17.997 [2024-12-06 21:57:50.717798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.997 [2024-12-06 21:57:50.821316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.997 [2024-12-06 21:57:50.821484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.575 Running I/O for 5 seconds... 00:07:20.922 18816.00 IOPS, 73.50 MiB/s [2024-12-06T21:57:54.726Z] 18208.00 IOPS, 71.12 MiB/s [2024-12-06T21:57:55.673Z] 18986.67 IOPS, 74.17 MiB/s [2024-12-06T21:57:56.605Z] 19232.00 IOPS, 75.12 MiB/s [2024-12-06T21:57:56.605Z] 19545.60 IOPS, 76.35 MiB/s 00:07:23.733 Latency(us) 00:07:23.733 [2024-12-06T21:57:56.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.733 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x0 length 0xbd0bd 00:07:23.733 Nvme0n1 : 5.06 1568.35 6.13 0.00 0.00 81408.91 16131.94 216167.98 00:07:23.733 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:23.733 Nvme0n1 : 5.07 1667.82 6.51 0.00 0.00 76557.03 13006.38 95985.03 00:07:23.733 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x0 length 0xa0000 00:07:23.733 Nvme1n1 : 5.06 1567.90 6.12 0.00 0.00 81350.31 17039.36 208102.01 00:07:23.733 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0xa0000 length 0xa0000 00:07:23.733 Nvme1n1 : 5.07 1667.11 6.51 0.00 0.00 76296.87 14115.45 80256.39 00:07:23.733 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x0 length 0x80000 00:07:23.733 Nvme2n1 : 5.06 1566.94 6.12 0.00 0.00 81171.53 17946.78 197616.25 00:07:23.733 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x80000 length 0x80000 00:07:23.733 Nvme2n1 : 5.07 1665.81 6.51 0.00 0.00 76127.95 16434.41 68157.44 00:07:23.733 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x0 length 0x80000 00:07:23.733 Nvme2n2 : 5.07 1566.30 6.12 0.00 0.00 81059.84 18551.73 191163.47 00:07:23.733 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x80000 length 0x80000 00:07:23.733 Nvme2n2 : 5.07 1665.34 6.51 0.00 0.00 75998.49 16031.11 64527.75 00:07:23.733 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x0 length 0x80000 00:07:23.733 Nvme2n3 : 5.07 1565.55 6.12 0.00 0.00 80945.42 19459.15 204069.02 00:07:23.733 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x80000 length 0x80000 00:07:23.733 Nvme2n3 : 5.07 1664.87 6.50 0.00 0.00 75878.63 15022.87 65334.35 00:07:23.733 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x0 length 0x20000 00:07:23.733 Nvme3n1 : 5.07 1565.11 6.11 0.00 0.00 80796.48 16031.11 216167.98 00:07:23.733 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:23.733 Verification LBA range: start 0x20000 length 0x20000 00:07:23.733 Nvme3n1 : 5.08 1675.00 6.54 0.00 0.00 75380.62 2848.30 68964.04 00:07:23.733 [2024-12-06T21:57:56.605Z] =================================================================================================================== 00:07:23.733 [2024-12-06T21:57:56.605Z] Total : 19406.11 75.81 0.00 0.00 78499.57 2848.30 216167.98 00:07:25.105 00:07:25.105 real 0m7.173s 00:07:25.105 user 0m13.398s 00:07:25.105 sys 0m0.224s 00:07:25.105 ************************************ 00:07:25.105 END TEST bdev_verify 00:07:25.105 ************************************ 00:07:25.105 21:57:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.105 21:57:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:25.105 21:57:57 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:25.105 21:57:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:25.105 21:57:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.105 21:57:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.105 ************************************ 00:07:25.105 START TEST bdev_verify_big_io 00:07:25.105 ************************************ 00:07:25.106 21:57:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:25.106 [2024-12-06 21:57:57.781705] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:07:25.106 [2024-12-06 21:57:57.781820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60499 ] 00:07:25.106 [2024-12-06 21:57:57.942811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.363 [2024-12-06 21:57:58.047614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.363 [2024-12-06 21:57:58.047729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.928 Running I/O for 5 seconds... 00:07:30.397 16.00 IOPS, 1.00 MiB/s [2024-12-06T21:58:04.640Z] 1688.50 IOPS, 105.53 MiB/s [2024-12-06T21:58:04.898Z] 2184.33 IOPS, 136.52 MiB/s 00:07:32.026 Latency(us) 00:07:32.026 [2024-12-06T21:58:04.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.026 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0x0 length 0xbd0b 00:07:32.026 Nvme0n1 : 5.68 113.89 7.12 0.00 0.00 1038088.84 27021.00 1213121.77 00:07:32.026 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:32.026 Nvme0n1 : 5.78 115.08 7.19 0.00 0.00 1051198.20 22483.89 1213121.77 00:07:32.026 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0x0 length 0xa000 00:07:32.026 Nvme1n1 : 5.90 125.98 7.87 0.00 0.00 951951.82 59688.17 1000180.18 00:07:32.026 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0xa000 length 0xa000 00:07:32.026 Nvme1n1 : 5.90 119.36 7.46 0.00 0.00 990731.31 57268.38 1000180.18 00:07:32.026 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0x0 length 0x8000 00:07:32.026 Nvme2n1 : 5.90 126.38 7.90 0.00 0.00 915411.28 59688.17 903388.55 00:07:32.026 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0x8000 length 0x8000 00:07:32.026 Nvme2n1 : 6.02 123.31 7.71 0.00 0.00 929386.90 72593.72 851766.35 00:07:32.026 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0x0 length 0x8000 00:07:32.026 Nvme2n2 : 5.91 130.03 8.13 0.00 0.00 862903.01 61704.66 903388.55 00:07:32.026 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0x8000 length 0x8000 00:07:32.026 Nvme2n2 : 6.02 127.58 7.97 0.00 0.00 872438.55 42346.34 871124.68 00:07:32.026 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0x0 length 0x8000 00:07:32.026 Nvme2n3 : 5.97 133.15 8.32 0.00 0.00 808810.50 61704.66 903388.55 00:07:32.026 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0x8000 length 0x8000 00:07:32.026 Nvme2n3 : 6.07 123.96 7.75 0.00 0.00 864208.67 43757.88 1664816.05 00:07:32.026 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:32.026 Verification LBA range: start 0x0 length 0x2000 00:07:32.026 Nvme3n1 : 6.08 157.98 9.87 0.00 0.00 663013.53 633.30 909841.33 00:07:32.026 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:07:32.026 Verification LBA range: start 0x2000 length 0x2000 00:07:32.026 Nvme3n1 : 6.09 139.39 8.71 0.00 0.00 743436.32 1197.29 1703532.70 00:07:32.026 [2024-12-06T21:58:04.898Z] =================================================================================================================== 00:07:32.026 [2024-12-06T21:58:04.898Z] Total : 1536.08 96.01 0.00 0.00 879885.02 633.30 1703532.70 00:07:33.916 ************************************ 00:07:33.916 END TEST bdev_verify_big_io 00:07:33.916 00:07:33.916 real 0m8.770s 00:07:33.916 user 0m16.582s 00:07:33.916 sys 0m0.224s 00:07:33.916 21:58:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.916 21:58:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:33.916 ************************************ 00:07:33.916 21:58:06 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.916 21:58:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:33.916 21:58:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.916 21:58:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.916 ************************************ 00:07:33.916 START TEST bdev_write_zeroes 00:07:33.916 ************************************ 00:07:33.916 21:58:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.916 [2024-12-06 21:58:06.598972] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:07:33.916 [2024-12-06 21:58:06.599096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60610 ] 00:07:33.916 [2024-12-06 21:58:06.760136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.173 [2024-12-06 21:58:06.871049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.790 Running I/O for 1 seconds... 
00:07:36.282 2083.00 IOPS, 8.14 MiB/s 00:07:36.282 Latency(us) 00:07:36.282 [2024-12-06T21:58:09.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.282 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.282 Nvme0n1 : 1.49 194.77 0.76 0.00 0.00 548921.00 9981.64 1355082.83 00:07:36.282 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.282 Nvme1n1 : 1.10 466.40 1.82 0.00 0.00 273771.91 105664.20 395232.49 00:07:36.282 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.282 Nvme2n1 : 1.10 465.90 1.82 0.00 0.00 272968.47 105664.20 395232.49 00:07:36.282 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.282 Nvme2n2 : 1.10 465.45 1.82 0.00 0.00 272160.30 105664.20 395232.49 00:07:36.282 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.282 Nvme2n3 : 1.10 467.42 1.83 0.00 0.00 271311.16 105664.20 392006.10 00:07:36.282 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.282 Nvme3n1 : 1.10 466.91 1.82 0.00 0.00 271036.26 101227.91 461373.44 00:07:36.282 [2024-12-06T21:58:09.154Z] =================================================================================================================== 00:07:36.282 [2024-12-06T21:58:09.154Z] Total : 2526.84 9.87 0.00 0.00 300489.31 9981.64 1355082.83 00:07:39.563 ************************************ 00:07:39.563 END TEST bdev_write_zeroes 00:07:39.563 ************************************ 00:07:39.563 00:07:39.563 real 0m5.480s 00:07:39.563 user 0m5.025s 00:07:39.563 sys 0m0.335s 00:07:39.563 21:58:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.563 21:58:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 21:58:12 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.563 21:58:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:39.563 21:58:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.563 21:58:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 ************************************ 00:07:39.563 START TEST bdev_json_nonenclosed 00:07:39.563 ************************************ 00:07:39.563 21:58:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.563 [2024-12-06 21:58:12.131568] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:07:39.563 [2024-12-06 21:58:12.131687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60676 ] 00:07:39.563 [2024-12-06 21:58:12.294250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.563 [2024-12-06 21:58:12.398639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.563 [2024-12-06 21:58:12.398875] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:39.563 [2024-12-06 21:58:12.398899] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:39.563 [2024-12-06 21:58:12.398909] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.822 ************************************ 00:07:39.822 END TEST bdev_json_nonenclosed 00:07:39.822 ************************************ 00:07:39.822 00:07:39.822 real 0m0.514s 00:07:39.822 user 0m0.309s 00:07:39.822 sys 0m0.099s 00:07:39.822 21:58:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.822 21:58:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:39.822 21:58:12 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.822 21:58:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:39.822 21:58:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.822 21:58:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.822 ************************************ 00:07:39.822 START TEST bdev_json_nonarray 00:07:39.822 ************************************ 00:07:39.822 21:58:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.080 [2024-12-06 21:58:12.706886] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:07:40.080 [2024-12-06 21:58:12.707143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60707 ] 00:07:40.080 [2024-12-06 21:58:12.870166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.337 [2024-12-06 21:58:12.970872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.337 [2024-12-06 21:58:12.970959] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
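bdev_json_nonarray, which follows, covers the complementary case: the root is an object, but its "subsystems" member is not an array, matching the error just logged. Again, the real nonarray.json is not shown in this log; this is a hypothetical config that would trip the same check:

    # Hypothetical stand-in for test/bdev/nonarray.json (actual contents not
    # shown in this log): "subsystems" is an object, so the parser reports
    # "'subsystems' should be an array."
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF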
00:07:40.337 [2024-12-06 21:58:12.970976] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.337 [2024-12-06 21:58:12.970985] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.337 00:07:40.337 real 0m0.513s 00:07:40.337 user 0m0.307s 00:07:40.337 sys 0m0.102s 00:07:40.337 ************************************ 00:07:40.337 END TEST bdev_json_nonarray 00:07:40.337 ************************************ 00:07:40.337 21:58:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.337 21:58:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:40.596 21:58:13 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:40.596 ************************************ 00:07:40.596 END TEST blockdev_nvme 00:07:40.596 ************************************ 00:07:40.596 00:07:40.596 real 0m39.864s 00:07:40.596 user 1m0.238s 00:07:40.596 sys 0m5.226s 00:07:40.596 21:58:13 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.596 21:58:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.596 21:58:13 -- spdk/autotest.sh@209 -- # uname -s 00:07:40.596 21:58:13 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:40.596 21:58:13 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:40.596 21:58:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.596 21:58:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.596 21:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:40.596 ************************************ 00:07:40.596 START TEST blockdev_nvme_gpt 00:07:40.596 ************************************ 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:40.596 * Looking for test storage... 
00:07:40.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.596 21:58:13 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.596 --rc genhtml_branch_coverage=1 00:07:40.596 --rc genhtml_function_coverage=1 00:07:40.596 --rc genhtml_legend=1 00:07:40.596 --rc geninfo_all_blocks=1 00:07:40.596 --rc geninfo_unexecuted_blocks=1 00:07:40.596 00:07:40.596 ' 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.596 --rc 
genhtml_branch_coverage=1 00:07:40.596 --rc genhtml_function_coverage=1 00:07:40.596 --rc genhtml_legend=1 00:07:40.596 --rc geninfo_all_blocks=1 00:07:40.596 --rc geninfo_unexecuted_blocks=1 00:07:40.596 00:07:40.596 ' 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.596 --rc genhtml_branch_coverage=1 00:07:40.596 --rc genhtml_function_coverage=1 00:07:40.596 --rc genhtml_legend=1 00:07:40.596 --rc geninfo_all_blocks=1 00:07:40.596 --rc geninfo_unexecuted_blocks=1 00:07:40.596 00:07:40.596 ' 00:07:40.596 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.596 --rc genhtml_branch_coverage=1 00:07:40.596 --rc genhtml_function_coverage=1 00:07:40.596 --rc genhtml_legend=1 00:07:40.596 --rc geninfo_all_blocks=1 00:07:40.596 --rc geninfo_unexecuted_blocks=1 00:07:40.596 00:07:40.596 ' 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:40.596 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60791 00:07:40.597 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:40.597 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:40.597 21:58:13 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60791 00:07:40.597 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60791 ']' 00:07:40.597 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.597 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.597 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.597 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.597 21:58:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.855 [2024-12-06 21:58:13.513947] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:07:40.855 [2024-12-06 21:58:13.514218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60791 ] 00:07:40.855 [2024-12-06 21:58:13.673064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.112 [2024-12-06 21:58:13.774471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.677 21:58:14 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.677 21:58:14 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:41.677 21:58:14 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:41.677 21:58:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:41.677 21:58:14 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:41.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:42.192 Waiting for block devices as requested 00:07:42.193 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.193 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.193 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.449 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.766 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:47.766 21:58:20 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
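The get_zoned_devs walk above classifies each namespace by its sysfs queue attribute: a device counts as zoned only when /sys/block/<dev>/queue/zoned exists and reads something other than "none", which is why every check above evaluates [[ none != none ]] and falls through. A condensed sketch of the same logic:

    # Condensed sketch of the zoned-device check performed above; on this
    # host every namespace reports "none", so no device is treated as zoned.
    for dev in /sys/block/nvme*n*; do
        attr="$dev/queue/zoned"
        if [[ -e $attr && $(<"$attr") != none ]]; then
            echo "zoned: ${dev##*/}"
        fi
    done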
00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:47.766 BYT; 00:07:47.766 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:47.766 BYT; 00:07:47.766 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:47.766 21:58:20 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:47.766 21:58:20 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:48.702 The operation has completed successfully. 00:07:48.702 21:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:49.634 The operation has completed successfully. 00:07:49.634 21:58:22 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:50.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:50.580 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.580 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.580 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.580 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.838 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:50.838 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.838 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.838 [] 00:07:50.838 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.838 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:50.838 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:50.838 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:50.838 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:50.838 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:50.838 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.838 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.096 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:51.096 21:58:23 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.096 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:51.096 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.096 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.096 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.096 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:51.096 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:51.096 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.096 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.096 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:51.097 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3b8e8a92-4b4b-4ad5-bc93-0f50dcfb0173"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3b8e8a92-4b4b-4ad5-bc93-0f50dcfb0173",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' 
' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ec9103a6-8620-4a78-9ec5-c1efb423110e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ec9103a6-8620-4a78-9ec5-c1efb423110e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' 
"ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2c9fe898-8b4a-4890-904a-9ac777ca7f35"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2c9fe898-8b4a-4890-904a-9ac777ca7f35",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1814ff50-2ea7-493b-b9f9-8a401fbec68c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1814ff50-2ea7-493b-b9f9-8a401fbec68c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "73c6849f-917b-4253-acde-54ee19f3c87f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "73c6849f-917b-4253-acde-54ee19f3c87f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:51.097 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:51.354 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:51.354 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:51.354 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:51.354 21:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60791 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60791 ']' 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60791 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60791 00:07:51.354 killing process with pid 60791 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60791' 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60791 00:07:51.354 21:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60791 00:07:52.720 21:58:25 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:52.720 21:58:25 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:52.720 21:58:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:52.720 21:58:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.720 21:58:25 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.720 ************************************ 00:07:52.720 START TEST bdev_hello_world 00:07:52.720 ************************************ 00:07:52.720 21:58:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:52.720 [2024-12-06 21:58:25.575987] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:07:52.720 [2024-12-06 21:58:25.576116] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61407 ] 00:07:52.977 [2024-12-06 21:58:25.738586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.977 [2024-12-06 21:58:25.838835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.541 [2024-12-06 21:58:26.394154] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:53.541 [2024-12-06 21:58:26.394214] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:53.541 [2024-12-06 21:58:26.394238] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:53.541 [2024-12-06 21:58:26.396633] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:53.541 [2024-12-06 21:58:26.397640] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:53.541 [2024-12-06 21:58:26.397682] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:53.541 [2024-12-06 21:58:26.397959] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
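That "Hello World!" line closes the loop: hello_bdev opened Nvme0n1 through the generated JSON config, wrote the string, and read it back. The invocation, as recorded earlier in this log:

    # hello_bdev invocation from the log: open bdev Nvme0n1 via the generated
    # config, write "Hello World!", read it back, then stop the app.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1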
00:07:53.541 00:07:53.541 [2024-12-06 21:58:26.397992] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:54.474 00:07:54.474 real 0m1.610s 00:07:54.474 user 0m1.309s 00:07:54.474 sys 0m0.192s 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:54.474 ************************************ 00:07:54.474 END TEST bdev_hello_world 00:07:54.474 ************************************ 00:07:54.474 21:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:54.474 21:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.474 21:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.474 21:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:54.474 ************************************ 00:07:54.474 START TEST bdev_bounds 00:07:54.474 ************************************ 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61449 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:54.474 Process bdevio pid: 61449 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61449' 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61449 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61449 ']' 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.474 21:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.475 21:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.475 21:58:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:54.475 21:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.475 21:58:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:54.475 [2024-12-06 21:58:27.247000] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
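bdev_bounds is a two-process test: a bdevio server starts against the same JSON config, and tests.py then drives it over RPC. Both commands are recorded in the surrounding log records; the backgrounding with & below is an illustrative simplification of the script's actual process handling, not something the log shows.

    # The two halves of bdev_bounds as recorded in this log. The flags
    # (-w, -s 0, and -c 0x7 in the EAL parameters) are reproduced verbatim;
    # the & is an illustrative simplification.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests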
00:07:54.475 [2024-12-06 21:58:27.247531] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61449 ] 00:07:54.731 [2024-12-06 21:58:27.411447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.731 [2024-12-06 21:58:27.514761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.731 [2024-12-06 21:58:27.515035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.731 [2024-12-06 21:58:27.515107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.294 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.294 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:55.294 21:58:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:55.553 I/O targets: 00:07:55.553 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:55.553 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:55.553 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:55.553 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:55.553 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:55.553 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:55.553 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:55.553 00:07:55.553 00:07:55.553 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.553 http://cunit.sourceforge.net/ 00:07:55.553 00:07:55.553 00:07:55.553 Suite: bdevio tests on: Nvme3n1 00:07:55.553 Test: blockdev write read block ...passed 00:07:55.553 Test: blockdev write zeroes read block ...passed 00:07:55.553 Test: blockdev write zeroes read no split ...passed 00:07:55.553 Test: blockdev write zeroes read split ...passed 00:07:55.553 Test: blockdev write zeroes read split partial ...passed 00:07:55.553 Test: blockdev reset ...[2024-12-06 21:58:28.226022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:55.553 [2024-12-06 21:58:28.228726] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:55.553 passed 00:07:55.553 Test: blockdev write read 8 blocks ...passed 00:07:55.553 Test: blockdev write read size > 128k ...passed 00:07:55.553 Test: blockdev write read invalid size ...passed 00:07:55.553 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.553 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.553 Test: blockdev write read max offset ...passed 00:07:55.553 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.553 Test: blockdev writev readv 8 blocks ...passed 00:07:55.553 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.553 Test: blockdev writev readv block ...passed 00:07:55.553 Test: blockdev writev readv size > 128k ...passed 00:07:55.553 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.553 Test: blockdev comparev and writev ...[2024-12-06 21:58:28.237555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5604000 len:0x1000 00:07:55.553 [2024-12-06 21:58:28.237600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.553 passed 00:07:55.553 Test: blockdev nvme passthru rw ...passed 00:07:55.553 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.553 Test: blockdev nvme admin passthru ...[2024-12-06 21:58:28.238142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.553 [2024-12-06 21:58:28.238164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.553 passed 00:07:55.553 Test: blockdev copy ...passed 00:07:55.553 Suite: bdevio tests on: Nvme2n3 00:07:55.553 Test: blockdev write read block ...passed 00:07:55.553 Test: blockdev write zeroes read block ...passed 00:07:55.553 Test: blockdev write zeroes read no split ...passed 00:07:55.553 Test: blockdev write zeroes read split ...passed 00:07:55.553 Test: blockdev write zeroes read split partial ...passed 00:07:55.553 Test: blockdev reset ...[2024-12-06 21:58:28.296047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:55.553 [2024-12-06 21:58:28.301461] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:55.553 passed 00:07:55.553 Test: blockdev write read 8 blocks ...passed 00:07:55.553 Test: blockdev write read size > 128k ...passed 00:07:55.553 Test: blockdev write read invalid size ...passed 00:07:55.553 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.553 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.553 Test: blockdev write read max offset ...passed 00:07:55.553 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.553 Test: blockdev writev readv 8 blocks ...passed 00:07:55.553 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.553 Test: blockdev writev readv block ...passed 00:07:55.553 Test: blockdev writev readv size > 128k ...passed 00:07:55.553 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.553 Test: blockdev comparev and writev ...[2024-12-06 21:58:28.315422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5602000 len:0x1000 00:07:55.553 [2024-12-06 21:58:28.315463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.553 passed 00:07:55.553 Test: blockdev nvme passthru rw ...passed 00:07:55.553 Test: blockdev nvme passthru vendor specific ...[2024-12-06 21:58:28.317360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.553 [2024-12-06 21:58:28.317391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.553 passed 00:07:55.553 Test: blockdev nvme admin passthru ...passed 00:07:55.553 Test: blockdev copy ...passed 00:07:55.553 Suite: bdevio tests on: Nvme2n2 00:07:55.553 Test: blockdev write read block ...passed 00:07:55.553 Test: blockdev write zeroes read block ...passed 00:07:55.553 Test: blockdev write zeroes read no split ...passed 00:07:55.553 Test: blockdev write zeroes read split ...passed 00:07:55.553 Test: blockdev write zeroes read split partial ...passed 00:07:55.553 Test: blockdev reset ...[2024-12-06 21:58:28.379780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:55.553 [2024-12-06 21:58:28.384704] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:55.553 passed 00:07:55.553 Test: blockdev write read 8 blocks ...passed 00:07:55.553 Test: blockdev write read size > 128k ...passed 00:07:55.553 Test: blockdev write read invalid size ...passed 00:07:55.553 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.553 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.553 Test: blockdev write read max offset ...passed 00:07:55.553 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.553 Test: blockdev writev readv 8 blocks ...passed 00:07:55.553 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.553 Test: blockdev writev readv block ...passed 00:07:55.553 Test: blockdev writev readv size > 128k ...passed 00:07:55.553 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.553 Test: blockdev comparev and writev ...[2024-12-06 21:58:28.398889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0a38000 len:0x1000 00:07:55.553 [2024-12-06 21:58:28.398928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.553 passed 00:07:55.553 Test: blockdev nvme passthru rw ...passed 00:07:55.553 Test: blockdev nvme passthru vendor specific ...[2024-12-06 21:58:28.400191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.553 [2024-12-06 21:58:28.400220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.553 passed 00:07:55.553 Test: blockdev nvme admin passthru ...passed 00:07:55.553 Test: blockdev copy ...passed 00:07:55.553 Suite: bdevio tests on: Nvme2n1 00:07:55.553 Test: blockdev write read block ...passed 00:07:55.553 Test: blockdev write zeroes read block ...passed 00:07:55.553 Test: blockdev write zeroes read no split ...passed 00:07:55.812 Test: blockdev write zeroes read split ...passed 00:07:55.812 Test: blockdev write zeroes read split partial ...passed 00:07:55.812 Test: blockdev reset ...[2024-12-06 21:58:28.461390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:55.812 [2024-12-06 21:58:28.464911] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:55.812 passed 00:07:55.812 Test: blockdev write read 8 blocks ...passed 00:07:55.812 Test: blockdev write read size > 128k ...passed 00:07:55.812 Test: blockdev write read invalid size ...passed 00:07:55.812 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.812 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.812 Test: blockdev write read max offset ...passed 00:07:55.812 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.812 Test: blockdev writev readv 8 blocks ...passed 00:07:55.812 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.812 Test: blockdev writev readv block ...passed 00:07:55.812 Test: blockdev writev readv size > 128k ...passed 00:07:55.812 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.812 Test: blockdev comparev and writev ...[2024-12-06 21:58:28.473157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0a34000 len:0x1000 00:07:55.812 [2024-12-06 21:58:28.473204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.812 passed 00:07:55.812 Test: blockdev nvme passthru rw ...passed 00:07:55.812 Test: blockdev nvme passthru vendor specific ...[2024-12-06 21:58:28.474084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:55.812 [2024-12-06 21:58:28.474112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:55.812 passed 00:07:55.812 Test: blockdev nvme admin passthru ...passed 00:07:55.812 Test: blockdev copy ...passed 00:07:55.812 Suite: bdevio tests on: Nvme1n1p2 00:07:55.812 Test: blockdev write read block ...passed 00:07:55.812 Test: blockdev write zeroes read block ...passed 00:07:55.812 Test: blockdev write zeroes read no split ...passed 00:07:55.812 Test: blockdev write zeroes read split ...passed 00:07:55.812 Test: blockdev write zeroes read split partial ...passed 00:07:55.813 Test: blockdev reset ...[2024-12-06 21:58:28.532898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:55.813 [2024-12-06 21:58:28.536757] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:55.813 passed 00:07:55.813 Test: blockdev write read 8 blocks ...passed 00:07:55.813 Test: blockdev write read size > 128k ...passed 00:07:55.813 Test: blockdev write read invalid size ...passed 00:07:55.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.813 Test: blockdev write read max offset ...passed 00:07:55.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.813 Test: blockdev writev readv 8 blocks ...passed 00:07:55.813 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.813 Test: blockdev writev readv block ...passed 00:07:55.813 Test: blockdev writev readv size > 128k ...passed 00:07:55.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.813 Test: blockdev comparev and writev ...[2024-12-06 21:58:28.551791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e0a30000 len:0x1000 00:07:55.813 [2024-12-06 21:58:28.551835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.813 passed 00:07:55.813 Test: blockdev nvme passthru rw ...passed 00:07:55.813 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.813 Test: blockdev nvme admin passthru ...passed 00:07:55.813 Test: blockdev copy ...passed 00:07:55.813 Suite: bdevio tests on: Nvme1n1p1 00:07:55.813 Test: blockdev write read block ...passed 00:07:55.813 Test: blockdev write zeroes read block ...passed 00:07:55.813 Test: blockdev write zeroes read no split ...passed 00:07:55.813 Test: blockdev write zeroes read split ...passed 00:07:55.813 Test: blockdev write zeroes read split partial ...passed 00:07:55.813 Test: blockdev reset ...[2024-12-06 21:58:28.603136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:55.813 [2024-12-06 21:58:28.607079] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
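Note on the partition bdevs: Nvme1n1p1 and Nvme1n1p2 are GPT partition bdevs carved out of the same namespace, which is why the COMPARE traces land at different absolute LBAs (655360 for p2 above, 256 for p1 below) — the partition layer adds each partition's start offset before the I/O reaches the controller. The layout can be inspected over RPC; a sketch, with the jq filter being illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_get_bdevs | jq -r \
    '.[] | select(.name | startswith("Nvme1n1p")) | "\(.name) \(.num_blocks) blocks"'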
00:07:55.813 passed 00:07:55.813 Test: blockdev write read 8 blocks ...passed 00:07:55.813 Test: blockdev write read size > 128k ...passed 00:07:55.813 Test: blockdev write read invalid size ...passed 00:07:55.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.813 Test: blockdev write read max offset ...passed 00:07:55.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.813 Test: blockdev writev readv 8 blocks ...passed 00:07:55.813 Test: blockdev writev readv 30 x 1block ...passed 00:07:55.813 Test: blockdev writev readv block ...passed 00:07:55.813 Test: blockdev writev readv size > 128k ...passed 00:07:55.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:55.813 Test: blockdev comparev and writev ...[2024-12-06 21:58:28.621261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c4c0e000 len:0x1000 00:07:55.813 [2024-12-06 21:58:28.621299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:55.813 passed 00:07:55.813 Test: blockdev nvme passthru rw ...passed 00:07:55.813 Test: blockdev nvme passthru vendor specific ...passed 00:07:55.813 Test: blockdev nvme admin passthru ...passed 00:07:55.813 Test: blockdev copy ...passed 00:07:55.813 Suite: bdevio tests on: Nvme0n1 00:07:55.813 Test: blockdev write read block ...passed 00:07:55.813 Test: blockdev write zeroes read block ...passed 00:07:55.813 Test: blockdev write zeroes read no split ...passed 00:07:55.813 Test: blockdev write zeroes read split ...passed 00:07:55.813 Test: blockdev write zeroes read split partial ...passed 00:07:55.813 Test: blockdev reset ...[2024-12-06 21:58:28.674015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:55.813 [2024-12-06 21:58:28.676726] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:55.813 passed 00:07:55.813 Test: blockdev write read 8 blocks ...passed 00:07:55.813 Test: blockdev write read size > 128k ...passed 00:07:55.813 Test: blockdev write read invalid size ...passed 00:07:55.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:55.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:55.813 Test: blockdev write read max offset ...passed 00:07:55.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:55.813 Test: blockdev writev readv 8 blocks ...passed 00:07:55.813 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.072 Test: blockdev writev readv block ...passed 00:07:56.072 Test: blockdev writev readv size > 128k ...passed 00:07:56.072 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.072 Test: blockdev comparev and writev ...[2024-12-06 21:58:28.684897] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:56.072 separate metadata which is not supported yet. 
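Note on the skip above: the Nvme0n1 comparev case is skipped rather than failed because that namespace is formatted with a separate per-block metadata region, which bdevio's comparev path does not handle yet — the harness logs the *ERROR* and moves on. Whether a namespace carries metadata shows up in its LBA formats; a hedged nvme-cli sketch against a hypothetical kernel device:

nvme id-ns /dev/nvme0n1 | grep lbaf    # ms:<n> > 0 means metadata per block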
00:07:56.072 passed 00:07:56.072 Test: blockdev nvme passthru rw ...passed 00:07:56.072 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.072 Test: blockdev nvme admin passthru ...[2024-12-06 21:58:28.685581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:56.072 [2024-12-06 21:58:28.685630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:56.072 passed 00:07:56.072 Test: blockdev copy ...passed 00:07:56.072 00:07:56.072 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.072 suites 7 7 n/a 0 0 00:07:56.072 tests 161 161 161 0 0 00:07:56.072 asserts 1025 1025 1025 0 n/a 00:07:56.072 00:07:56.072 Elapsed time = 1.331 seconds 00:07:56.072 0 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61449 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61449 ']' 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61449 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61449 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.072 killing process with pid 61449 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61449' 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61449 00:07:56.072 21:58:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61449 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:56.639 00:07:56.639 real 0m2.242s 00:07:56.639 user 0m5.600s 00:07:56.639 sys 0m0.305s 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.639 ************************************ 00:07:56.639 END TEST bdev_bounds 00:07:56.639 ************************************ 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:56.639 21:58:29 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:56.639 21:58:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.639 21:58:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.639 21:58:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.639 ************************************ 00:07:56.639 START TEST bdev_nbd 00:07:56.639 ************************************ 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:56.639 21:58:29 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61503 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61503 /var/tmp/spdk-nbd.sock 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61503 ']' 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.639 21:58:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:56.896 [2024-12-06 21:58:29.554613] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
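Note on the startup sequence: bdev_svc is launched with -r /var/tmp/spdk-nbd.sock, and waitforlisten blocks until that RPC socket answers before any nbd_start_disk call is attempted. In essence it is a bounded poll — a condensed sketch of the idea (the real helper in autotest_common.sh also tracks the pid and handles timeouts):

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 1 100); do
    $rpc -s "$sock" rpc_get_methods &>/dev/null && break   # app is up
    sleep 0.1
done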
00:07:56.897 [2024-12-06 21:58:29.554743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.897 [2024-12-06 21:58:29.718743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.155 [2024-12-06 21:58:29.818085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:57.724 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.984 1+0 records in 00:07:57.984 1+0 records out 00:07:57.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120662 s, 3.4 MB/s 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:57.984 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.242 1+0 records in 00:07:58.242 1+0 records out 00:07:58.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945336 s, 4.3 MB/s 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.242 21:58:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.242 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.243 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.243 1+0 records in 00:07:58.243 1+0 records out 00:07:58.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126146 s, 3.2 MB/s 00:07:58.243 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.243 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.243 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.243 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.243 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.243 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.243 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.243 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.502 21:58:31 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.502 1+0 records in 00:07:58.502 1+0 records out 00:07:58.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903896 s, 4.5 MB/s 00:07:58.765 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.766 1+0 records in 00:07:58.766 1+0 records out 00:07:58.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000895016 s, 4.6 MB/s 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.766 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.023 1+0 records in 00:07:59.023 1+0 records out 00:07:59.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114378 s, 3.6 MB/s 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:59.023 21:58:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.281 1+0 records in 00:07:59.281 1+0 records out 00:07:59.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100014 s, 4.1 MB/s 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:59.281 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd0", 00:07:59.539 "bdev_name": "Nvme0n1" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd1", 00:07:59.539 "bdev_name": "Nvme1n1p1" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd2", 00:07:59.539 "bdev_name": "Nvme1n1p2" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd3", 00:07:59.539 "bdev_name": "Nvme2n1" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd4", 00:07:59.539 "bdev_name": "Nvme2n2" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd5", 00:07:59.539 "bdev_name": "Nvme2n3" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd6", 00:07:59.539 "bdev_name": "Nvme3n1" 00:07:59.539 } 00:07:59.539 ]' 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd0", 00:07:59.539 "bdev_name": "Nvme0n1" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd1", 00:07:59.539 "bdev_name": "Nvme1n1p1" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd2", 00:07:59.539 "bdev_name": "Nvme1n1p2" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd3", 00:07:59.539 "bdev_name": "Nvme2n1" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd4", 00:07:59.539 "bdev_name": "Nvme2n2" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd5", 00:07:59.539 "bdev_name": "Nvme2n3" 00:07:59.539 }, 00:07:59.539 { 00:07:59.539 "nbd_device": "/dev/nbd6", 00:07:59.539 "bdev_name": "Nvme3n1" 00:07:59.539 } 00:07:59.539 ]' 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.539 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.798 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.056 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:00.314 21:58:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:00.314 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:00.314 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:00.314 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.314 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.314 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:00.314 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.314 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.314 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.314 21:58:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.571 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.830 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
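Note on the teardown loop above: each nbd_stop_disk RPC is followed by waitfornbd_exit, which polls /proc/partitions until the kernel has actually dropped the device — the "break" in the trace fires as soon as grep stops matching. A condensed sketch of that loop:

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break   # kernel dropped it
        sleep 0.1
    done
}
waitfornbd_exit nbd0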
00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.088 21:58:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:01.346 21:58:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:01.346 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:01.604 /dev/nbd0 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.604 1+0 records in 00:08:01.604 1+0 records out 00:08:01.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514133 s, 8.0 MB/s 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:01.604 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:01.863 /dev/nbd1 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.863 21:58:34 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.863 1+0 records in 00:08:01.863 1+0 records out 00:08:01.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011531 s, 3.6 MB/s 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:01.863 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:02.122 /dev/nbd10 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.122 1+0 records in 00:08:02.122 1+0 records out 00:08:02.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000856685 s, 4.8 MB/s 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.122 21:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:02.379 /dev/nbd11 00:08:02.379 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:02.379 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:02.379 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.380 1+0 records in 00:08:02.380 1+0 records out 00:08:02.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617595 s, 6.6 MB/s 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.380 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:02.637 /dev/nbd12 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
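Note on this second pass: nbd_rpc_data_verify maps all seven bdevs again, this time onto an explicit /dev/nbd0 … /dev/nbd14 list, and waitfornbd gates each one — first poll /proc/partitions for the name, then prove the device is readable with a single 4 KiB O_DIRECT read whose size is checked afterwards. That is the dd/stat/rm trio repeated in the trace; a condensed sketch (the real helper copies into test/bdev/nbdtest, /tmp is used here for brevity):

waitfornbd() {
    local nbd_name=$1 tmp=/tmp/nbdtest i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s "$tmp")" -eq 4096 ] || return 1   # read must yield a full block
    rm -f "$tmp"
}
waitfornbd nbd10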
00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.637 1+0 records in 00:08:02.637 1+0 records out 00:08:02.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104462 s, 3.9 MB/s 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.637 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:02.894 /dev/nbd13 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.894 1+0 records in 00:08:02.894 1+0 records out 00:08:02.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000984575 s, 4.2 MB/s 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.894 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:02.894 /dev/nbd14 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.151 1+0 records in 00:08:03.151 1+0 records out 00:08:03.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00435138 s, 941 kB/s 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.151 21:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.151 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd0", 00:08:03.151 "bdev_name": "Nvme0n1" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd1", 00:08:03.151 "bdev_name": "Nvme1n1p1" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd10", 00:08:03.151 "bdev_name": "Nvme1n1p2" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd11", 00:08:03.151 "bdev_name": "Nvme2n1" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd12", 00:08:03.151 "bdev_name": "Nvme2n2" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd13", 00:08:03.151 "bdev_name": "Nvme2n3" 
00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd14", 00:08:03.151 "bdev_name": "Nvme3n1" 00:08:03.151 } 00:08:03.151 ]' 00:08:03.151 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.151 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd0", 00:08:03.151 "bdev_name": "Nvme0n1" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd1", 00:08:03.151 "bdev_name": "Nvme1n1p1" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd10", 00:08:03.151 "bdev_name": "Nvme1n1p2" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd11", 00:08:03.151 "bdev_name": "Nvme2n1" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd12", 00:08:03.151 "bdev_name": "Nvme2n2" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd13", 00:08:03.151 "bdev_name": "Nvme2n3" 00:08:03.151 }, 00:08:03.151 { 00:08:03.151 "nbd_device": "/dev/nbd14", 00:08:03.151 "bdev_name": "Nvme3n1" 00:08:03.151 } 00:08:03.151 ]' 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:03.410 /dev/nbd1 00:08:03.410 /dev/nbd10 00:08:03.410 /dev/nbd11 00:08:03.410 /dev/nbd12 00:08:03.410 /dev/nbd13 00:08:03.410 /dev/nbd14' 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:03.410 /dev/nbd1 00:08:03.410 /dev/nbd10 00:08:03.410 /dev/nbd11 00:08:03.410 /dev/nbd12 00:08:03.410 /dev/nbd13 00:08:03.410 /dev/nbd14' 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:03.410 256+0 records in 00:08:03.410 256+0 records out 00:08:03.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681786 s, 154 MB/s 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.410 256+0 records in 00:08:03.410 256+0 records out 00:08:03.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.18047 s, 5.8 MB/s 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.410 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:03.698 256+0 records in 00:08:03.698 256+0 records out 00:08:03.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243633 s, 4.3 MB/s 00:08:03.698 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.698 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:03.957 256+0 records in 00:08:03.957 256+0 records out 00:08:03.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190058 s, 5.5 MB/s 00:08:03.957 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.957 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:04.215 256+0 records in 00:08:04.215 256+0 records out 00:08:04.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209824 s, 5.0 MB/s 00:08:04.215 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.216 21:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:04.473 256+0 records in 00:08:04.473 256+0 records out 00:08:04.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24514 s, 4.3 MB/s 00:08:04.473 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.473 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:04.732 256+0 records in 00:08:04.732 256+0 records out 00:08:04.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.253993 s, 4.1 MB/s 00:08:04.733 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.733 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:04.991 256+0 records in 00:08:04.991 256+0 records out 00:08:04.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198734 s, 5.3 MB/s 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.991 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.249 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.249 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.249 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.249 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.250 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.250 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.250 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.250 21:58:37 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:05.250 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.250 21:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.250 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.507 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:05.765 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:05.765 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:05.765 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:05.765 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.766 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.766 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:05.766 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.766 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.766 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.766 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.024 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.282 21:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.282 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.539 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.539 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.539 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.539 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:06.539 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.539 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:06.540 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:06.797 malloc_lvol_verify 00:08:06.797 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:07.055 41027f2e-3187-4a86-8ddc-0d1cc52b7d7e 00:08:07.055 21:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:07.312 5bd45733-fd28-4a05-a0b2-07c4df65c9fb 00:08:07.312 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:07.570 /dev/nbd0 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:07.570 mke2fs 1.47.0 (5-Feb-2023) 00:08:07.570 Discarding device blocks: 0/4096 done 00:08:07.570 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:07.570 00:08:07.570 Allocating group tables: 0/1 done 00:08:07.570 Writing inode tables: 0/1 done 00:08:07.570 Creating journal (1024 blocks): done 00:08:07.570 Writing superblocks and filesystem accounting information: 0/1 done 00:08:07.570 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:07.570 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61503 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61503 ']' 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61503 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61503 00:08:07.828 killing process with pid 61503 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61503' 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61503 00:08:07.828 21:58:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61503 00:08:08.763 ************************************ 00:08:08.763 END TEST bdev_nbd 00:08:08.763 ************************************ 00:08:08.763 21:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:08.763 00:08:08.763 real 0m11.811s 00:08:08.763 user 0m16.093s 00:08:08.763 sys 0m3.954s 00:08:08.763 21:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.763 21:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:08.763 skipping fio tests on NVMe due to multi-ns failures. 00:08:08.763 21:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:08.763 21:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:08:08.763 21:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:08:08.763 21:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:08.763 21:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:08.763 21:58:41 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:08.763 21:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:08.763 21:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.763 21:58:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.763 ************************************ 00:08:08.763 START TEST bdev_verify 00:08:08.763 ************************************ 00:08:08.763 21:58:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:08.763 [2024-12-06 21:58:41.423376] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:08:08.763 [2024-12-06 21:58:41.423491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61924 ] 00:08:08.763 [2024-12-06 21:58:41.585772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.020 [2024-12-06 21:58:41.685933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.020 [2024-12-06 21:58:41.686014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.587 Running I/O for 5 seconds... 
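Annotation: the bdev_verify run that just launched drives every bdev from `bdev.json` with bdevperf: `-q 128` keeps 128 I/Os in flight, `-o 4096` issues 4 KiB I/Os, `-w verify` writes patterns and reads them back for comparison, `-t 5` runs for five seconds, and `-m 0x3` uses two cores. Judging by the paired `Core Mask 0x1`/`Core Mask 0x2` rows in the table that follows, `-C` fans each bdev out to every core in the mask, though that reading is inferred from the output rather than taken from the tool's documentation. A minimal way to try the same workload without NVMe hardware is to point bdevperf at a RAM-backed malloc bdev; the config file below is an illustrative assumption, not a file from the repo.

```bash
# Hypothetical standalone repro: same flags as the log, but against a
# 128 MiB malloc bdev so it runs on any machine with SPDK built.
cat > /tmp/malloc.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/malloc.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
```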
00:08:11.892 20224.00 IOPS, 79.00 MiB/s [2024-12-06T21:58:45.728Z] 20384.00 IOPS, 79.62 MiB/s [2024-12-06T21:58:46.657Z] 20629.33 IOPS, 80.58 MiB/s [2024-12-06T21:58:47.585Z] 20608.00 IOPS, 80.50 MiB/s [2024-12-06T21:58:47.585Z] 20633.60 IOPS, 80.60 MiB/s 00:08:14.713 Latency(us) 00:08:14.713 [2024-12-06T21:58:47.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.713 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x0 length 0xbd0bd 00:08:14.713 Nvme0n1 : 5.09 1457.74 5.69 0.00 0.00 87568.47 21273.99 96388.33 00:08:14.713 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:14.713 Nvme0n1 : 5.05 1444.43 5.64 0.00 0.00 88163.92 20064.10 95985.03 00:08:14.713 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x0 length 0x4ff80 00:08:14.713 Nvme1n1p1 : 5.09 1457.33 5.69 0.00 0.00 87404.17 19660.80 87112.47 00:08:14.713 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:14.713 Nvme1n1p1 : 5.08 1447.94 5.66 0.00 0.00 87789.16 12451.84 85902.57 00:08:14.713 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x0 length 0x4ff7f 00:08:14.713 Nvme1n1p2 : 5.10 1456.86 5.69 0.00 0.00 87310.82 19257.50 80256.39 00:08:14.713 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:14.713 Nvme1n1p2 : 5.08 1447.50 5.65 0.00 0.00 87603.01 12804.73 76626.71 00:08:14.713 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x0 length 0x80000 00:08:14.713 Nvme2n1 : 5.10 1455.99 5.69 0.00 0.00 87189.29 20769.87 75416.81 00:08:14.713 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x80000 length 0x80000 00:08:14.713 Nvme2n1 : 5.10 1456.45 5.69 0.00 0.00 87126.75 10284.11 68157.44 00:08:14.713 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x0 length 0x80000 00:08:14.713 Nvme2n2 : 5.10 1454.97 5.68 0.00 0.00 87062.01 23189.66 67754.14 00:08:14.713 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x80000 length 0x80000 00:08:14.713 Nvme2n2 : 5.10 1456.05 5.69 0.00 0.00 86957.86 10435.35 68964.04 00:08:14.713 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x0 length 0x80000 00:08:14.713 Nvme2n3 : 5.11 1453.95 5.68 0.00 0.00 86910.93 22887.19 70577.23 00:08:14.713 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x80000 length 0x80000 00:08:14.713 Nvme2n3 : 5.10 1455.04 5.68 0.00 0.00 86796.63 12199.78 69367.34 00:08:14.713 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x0 length 0x20000 00:08:14.713 Nvme3n1 : 5.11 1452.99 5.68 0.00 0.00 86765.44 20265.75 73803.62 00:08:14.713 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:14.713 Verification LBA range: start 0x20000 length 0x20000 00:08:14.713 
Nvme3n1 : 5.11 1454.02 5.68 0.00 0.00 86657.88 14619.57 72593.72 00:08:14.713 [2024-12-06T21:58:47.585Z] =================================================================================================================== 00:08:14.713 [2024-12-06T21:58:47.585Z] Total : 20351.23 79.50 0.00 0.00 87234.45 10284.11 96388.33 00:08:16.087 00:08:16.087 real 0m7.353s 00:08:16.087 user 0m13.743s 00:08:16.087 sys 0m0.214s 00:08:16.087 21:58:48 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.087 ************************************ 00:08:16.087 END TEST bdev_verify 00:08:16.087 ************************************ 00:08:16.087 21:58:48 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:16.087 21:58:48 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.087 21:58:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:16.087 21:58:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.087 21:58:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.087 ************************************ 00:08:16.087 START TEST bdev_verify_big_io 00:08:16.087 ************************************ 00:08:16.087 21:58:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.087 [2024-12-06 21:58:48.848035] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:08:16.087 [2024-12-06 21:58:48.848165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62022 ] 00:08:16.345 [2024-12-06 21:58:49.008809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.345 [2024-12-06 21:58:49.130483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.345 [2024-12-06 21:58:49.130592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.281 Running I/O for 5 seconds... 
00:08:20.570 0.00 IOPS, 0.00 MiB/s [2024-12-06T21:58:55.975Z] 887.00 IOPS, 55.44 MiB/s [2024-12-06T21:58:56.232Z] 1675.33 IOPS, 104.71 MiB/s [2024-12-06T21:58:56.232Z] 2265.50 IOPS, 141.59 MiB/s 00:08:23.360 Latency(us) 00:08:23.360 [2024-12-06T21:58:56.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.360 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.360 Verification LBA range: start 0x0 length 0xbd0b 00:08:23.360 Nvme0n1 : 5.95 91.59 5.72 0.00 0.00 1305670.91 19963.27 1432516.14 00:08:23.360 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.360 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:23.360 Nvme0n1 : 5.90 96.34 6.02 0.00 0.00 1266717.33 27222.65 1419610.58 00:08:23.360 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.360 Verification LBA range: start 0x0 length 0x4ff8 00:08:23.360 Nvme1n1p1 : 5.95 96.40 6.02 0.00 0.00 1231861.80 100421.32 1277649.53 00:08:23.360 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.360 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:23.360 Nvme1n1p1 : 5.90 97.64 6.10 0.00 0.00 1207072.30 115343.36 1226027.32 00:08:23.360 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.360 Verification LBA range: start 0x0 length 0x4ff7 00:08:23.360 Nvme1n1p2 : 6.12 100.32 6.27 0.00 0.00 1139724.08 65334.35 1032444.06 00:08:23.360 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.360 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:23.360 Nvme1n1p2 : 6.08 101.49 6.34 0.00 0.00 1121996.91 84289.38 1045349.61 00:08:23.360 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.360 Verification LBA range: start 0x0 length 0x8000 00:08:23.360 Nvme2n1 : 6.12 100.30 6.27 0.00 0.00 1097928.83 64124.46 1051802.39 00:08:23.360 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.360 Verification LBA range: start 0x8000 length 0x8000 00:08:23.360 Nvme2n1 : 6.08 103.76 6.49 0.00 0.00 1070270.71 85095.98 1555118.87 00:08:23.360 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.360 Verification LBA range: start 0x0 length 0x8000 00:08:23.360 Nvme2n2 : 6.12 104.52 6.53 0.00 0.00 1027791.64 96791.63 1077613.49 00:08:23.360 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.361 Verification LBA range: start 0x8000 length 0x8000 00:08:23.361 Nvme2n2 : 6.17 108.56 6.78 0.00 0.00 988411.09 87919.06 1245385.65 00:08:23.361 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.361 Verification LBA range: start 0x0 length 0x8000 00:08:23.361 Nvme2n3 : 6.21 113.42 7.09 0.00 0.00 918731.08 29642.44 1096971.82 00:08:23.361 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.361 Verification LBA range: start 0x8000 length 0x8000 00:08:23.361 Nvme2n3 : 6.29 118.30 7.39 0.00 0.00 878110.44 36901.81 1271196.75 00:08:23.361 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.361 Verification LBA range: start 0x0 length 0x2000 00:08:23.361 Nvme3n1 : 6.31 124.71 7.79 0.00 0.00 809539.53 2243.35 2116510.33 00:08:23.361 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.361 Verification LBA range: start 0x2000 length 0x2000 00:08:23.361 Nvme3n1 : 6.30 123.51 7.72 0.00 0.00 
818689.22 3503.66 2051982.57 00:08:23.361 [2024-12-06T21:58:56.233Z] =================================================================================================================== 00:08:23.361 [2024-12-06T21:58:56.233Z] Total : 1480.86 92.55 0.00 0.00 1044929.48 2243.35 2116510.33 00:08:25.260 00:08:25.260 real 0m8.879s 00:08:25.260 user 0m16.772s 00:08:25.260 sys 0m0.254s 00:08:25.260 21:58:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.260 21:58:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:25.260 ************************************ 00:08:25.260 END TEST bdev_verify_big_io 00:08:25.260 ************************************ 00:08:25.260 21:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.260 21:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:25.260 21:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.260 21:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:25.260 ************************************ 00:08:25.260 START TEST bdev_write_zeroes 00:08:25.260 ************************************ 00:08:25.260 21:58:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.260 [2024-12-06 21:58:57.786406] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:08:25.260 [2024-12-06 21:58:57.786521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62137 ] 00:08:25.260 [2024-12-06 21:58:57.939204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.260 [2024-12-06 21:58:58.040366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.826 Running I/O for 1 seconds... 
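Annotation: a quick way to read the two verify tables above is that the MiB/s column is just IOPS times the I/O size. The 4 KiB verify run totals 20351.23 IOPS × 4096 B ≈ 79.50 MiB/s and the 64 KiB big-I/O run totals 1480.86 IOPS × 65536 B ≈ 92.55 MiB/s, both matching their Total rows; the big-I/O pass moves more data per second with roughly a fourteenth of the IOPS because each operation carries sixteen times the payload.

```bash
# Cross-check the Total rows above: MiB/s = IOPS * io_size_bytes / 2^20.
awk 'BEGIN {
    printf "verify (4 KiB):  %.2f MiB/s\n", 20351.23 * 4096  / 1048576;  # table: 79.50
    printf "big_io (64 KiB): %.2f MiB/s\n", 1480.86  * 65536 / 1048576;  # table: 92.55
}'
```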
00:08:27.198 55104.00 IOPS, 215.25 MiB/s 00:08:27.198 Latency(us) 00:08:27.198 [2024-12-06T21:59:00.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.198 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.198 Nvme0n1 : 1.03 7856.35 30.69 0.00 0.00 16232.75 7158.55 33070.47 00:08:27.198 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.198 Nvme1n1p1 : 1.03 7846.72 30.65 0.00 0.00 16232.64 11241.94 32868.82 00:08:27.198 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.198 Nvme1n1p2 : 1.03 7885.70 30.80 0.00 0.00 16044.19 7914.73 26214.40 00:08:27.198 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.198 Nvme2n1 : 1.03 7876.83 30.77 0.00 0.00 16012.90 8318.03 24399.56 00:08:27.198 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.198 Nvme2n2 : 1.03 7867.99 30.73 0.00 0.00 15978.66 8670.92 23794.61 00:08:27.198 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.198 Nvme2n3 : 1.03 7859.02 30.70 0.00 0.00 15941.75 9023.80 23996.26 00:08:27.198 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:27.198 Nvme3n1 : 1.04 7850.24 30.67 0.00 0.00 15922.25 9376.69 24298.73 00:08:27.198 [2024-12-06T21:59:00.070Z] =================================================================================================================== 00:08:27.198 [2024-12-06T21:59:00.070Z] Total : 55042.85 215.01 0.00 0.00 16051.76 7158.55 33070.47 00:08:27.764 00:08:27.764 real 0m2.689s 00:08:27.764 user 0m2.370s 00:08:27.764 sys 0m0.202s 00:08:27.764 21:59:00 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.764 21:59:00 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:27.764 ************************************ 00:08:27.764 END TEST bdev_write_zeroes 00:08:27.764 ************************************ 00:08:27.764 21:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:27.764 21:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:27.764 21:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.764 21:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:27.764 ************************************ 00:08:27.764 START TEST bdev_json_nonenclosed 00:08:27.764 ************************************ 00:08:27.764 21:59:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:27.764 [2024-12-06 21:59:00.513290] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
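Annotation: the bdev_json_nonenclosed test starting here is a negative test. It hands bdevperf a JSON config whose top level is not wrapped in an object and expects startup to fail with the `not enclosed in {}` error that appears in the next lines. The repo's actual `nonenclosed.json` is not reproduced in the log, so the stand-in below is only a plausible illustration of the shape being rejected.

```bash
# Illustrative stand-in for nonenclosed.json: the top level is not a
# single {...} object, so SPDK's config parsing rejects it at startup.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF

# Expected to fail with: Invalid JSON configuration: not enclosed in {}.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 \
    || echo "rejected as expected"
```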
00:08:27.764 [2024-12-06 21:59:00.513686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62190 ] 00:08:28.022 [2024-12-06 21:59:00.674475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.022 [2024-12-06 21:59:00.772190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.022 [2024-12-06 21:59:00.772273] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:28.022 [2024-12-06 21:59:00.772289] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:28.022 [2024-12-06 21:59:00.772298] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.279 00:08:28.279 real 0m0.499s 00:08:28.279 user 0m0.308s 00:08:28.279 sys 0m0.087s 00:08:28.279 21:59:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.279 21:59:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:28.279 ************************************ 00:08:28.279 END TEST bdev_json_nonenclosed 00:08:28.279 ************************************ 00:08:28.279 21:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:28.279 21:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:28.279 21:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.279 21:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:28.279 ************************************ 00:08:28.279 START TEST bdev_json_nonarray 00:08:28.279 ************************************ 00:08:28.279 21:59:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:28.279 [2024-12-06 21:59:01.058920] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:08:28.279 [2024-12-06 21:59:01.059037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62210 ] 00:08:28.537 [2024-12-06 21:59:01.218304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.537 [2024-12-06 21:59:01.314501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.537 [2024-12-06 21:59:01.314589] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
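Annotation: the `'subsystems' should be an array` error just printed is the bdev_json_nonarray sibling of the previous test; the rpc and app_stop lines that follow complete its clean shutdown. This time the config is a well-formed object, but `subsystems` holds the wrong JSON type. Again the repo file itself is not shown in the log, so the config below is an assumed illustration of the failure mode.

```bash
# Illustrative stand-in for nonarray.json: properly enclosed in {}, but
# "subsystems" is an object where an array is required.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF

# Expected to fail with:
#   Invalid JSON configuration: 'subsystems' should be an array.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 \
    || echo "rejected as expected"
```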
00:08:28.537 [2024-12-06 21:59:01.314607] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:28.537 [2024-12-06 21:59:01.314616] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.795 00:08:28.795 real 0m0.490s 00:08:28.795 user 0m0.304s 00:08:28.795 sys 0m0.082s 00:08:28.795 ************************************ 00:08:28.795 END TEST bdev_json_nonarray 00:08:28.795 ************************************ 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:28.795 21:59:01 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:28.795 21:59:01 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:28.795 21:59:01 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:28.795 21:59:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.795 21:59:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.795 21:59:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:28.795 ************************************ 00:08:28.795 START TEST bdev_gpt_uuid 00:08:28.795 ************************************ 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62241 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62241 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62241 ']' 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.795 21:59:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.795 [2024-12-06 21:59:01.607700] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
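Annotation: the bdev_gpt_uuid test booting here loads the saved bdev config into spdk_tgt, waits for bdev examine to finish, then looks up each GPT partition bdev by its unique partition GUID and asserts that the GUID round-trips through both the bdev alias and the `driver_specific.gpt` fields, as the JSON dumps and `jq` checks below show. The pattern reduces to a few lines of shell; the GUID is the first-partition value visible later in the log, and the standalone-script framing is an assumption.

```bash
# Sketch of the GUID round-trip assertion performed in the test below.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030

bdev=$("$RPC" bdev_get_bdevs -b "$uuid")    # look the partition up by GUID
[[ $(jq -r length <<<"$bdev") == 1 ]]       # exactly one bdev matches
[[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]
echo "GPT unique partition GUID $uuid round-trips"
```

The second partition (abf1734f-66e5-4c0f-aa29-4021d4d307df, `SPDK_TEST_second`) gets the same treatment in the log that follows.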
00:08:28.795 [2024-12-06 21:59:01.607960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62241 ] 00:08:29.053 [2024-12-06 21:59:01.760904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.053 [2024-12-06 21:59:01.856211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.618 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.618 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:29.618 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:29.618 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.618 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:30.184 Some configs were skipped because the RPC state that can call them passed over. 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.184 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:30.184 { 00:08:30.184 "name": "Nvme1n1p1", 00:08:30.184 "aliases": [ 00:08:30.184 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:30.184 ], 00:08:30.184 "product_name": "GPT Disk", 00:08:30.184 "block_size": 4096, 00:08:30.184 "num_blocks": 655104, 00:08:30.184 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:30.184 "assigned_rate_limits": { 00:08:30.184 "rw_ios_per_sec": 0, 00:08:30.184 "rw_mbytes_per_sec": 0, 00:08:30.184 "r_mbytes_per_sec": 0, 00:08:30.184 "w_mbytes_per_sec": 0 00:08:30.184 }, 00:08:30.184 "claimed": false, 00:08:30.184 "zoned": false, 00:08:30.184 "supported_io_types": { 00:08:30.184 "read": true, 00:08:30.184 "write": true, 00:08:30.184 "unmap": true, 00:08:30.184 "flush": true, 00:08:30.184 "reset": true, 00:08:30.184 "nvme_admin": false, 00:08:30.184 "nvme_io": false, 00:08:30.184 "nvme_io_md": false, 00:08:30.184 "write_zeroes": true, 00:08:30.184 "zcopy": false, 00:08:30.184 "get_zone_info": false, 00:08:30.184 "zone_management": false, 00:08:30.184 "zone_append": false, 00:08:30.184 "compare": true, 00:08:30.184 "compare_and_write": false, 00:08:30.184 "abort": true, 00:08:30.185 "seek_hole": false, 00:08:30.185 "seek_data": false, 00:08:30.185 "copy": true, 00:08:30.185 "nvme_iov_md": false 00:08:30.185 }, 00:08:30.185 "driver_specific": { 
00:08:30.185 "gpt": { 00:08:30.185 "base_bdev": "Nvme1n1", 00:08:30.185 "offset_blocks": 256, 00:08:30.185 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:30.185 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:30.185 "partition_name": "SPDK_TEST_first" 00:08:30.185 } 00:08:30.185 } 00:08:30.185 } 00:08:30.185 ]' 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:30.185 { 00:08:30.185 "name": "Nvme1n1p2", 00:08:30.185 "aliases": [ 00:08:30.185 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:30.185 ], 00:08:30.185 "product_name": "GPT Disk", 00:08:30.185 "block_size": 4096, 00:08:30.185 "num_blocks": 655103, 00:08:30.185 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:30.185 "assigned_rate_limits": { 00:08:30.185 "rw_ios_per_sec": 0, 00:08:30.185 "rw_mbytes_per_sec": 0, 00:08:30.185 "r_mbytes_per_sec": 0, 00:08:30.185 "w_mbytes_per_sec": 0 00:08:30.185 }, 00:08:30.185 "claimed": false, 00:08:30.185 "zoned": false, 00:08:30.185 "supported_io_types": { 00:08:30.185 "read": true, 00:08:30.185 "write": true, 00:08:30.185 "unmap": true, 00:08:30.185 "flush": true, 00:08:30.185 "reset": true, 00:08:30.185 "nvme_admin": false, 00:08:30.185 "nvme_io": false, 00:08:30.185 "nvme_io_md": false, 00:08:30.185 "write_zeroes": true, 00:08:30.185 "zcopy": false, 00:08:30.185 "get_zone_info": false, 00:08:30.185 "zone_management": false, 00:08:30.185 "zone_append": false, 00:08:30.185 "compare": true, 00:08:30.185 "compare_and_write": false, 00:08:30.185 "abort": true, 00:08:30.185 "seek_hole": false, 00:08:30.185 "seek_data": false, 00:08:30.185 "copy": true, 00:08:30.185 "nvme_iov_md": false 00:08:30.185 }, 00:08:30.185 "driver_specific": { 00:08:30.185 "gpt": { 00:08:30.185 "base_bdev": "Nvme1n1", 00:08:30.185 "offset_blocks": 655360, 00:08:30.185 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:30.185 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:30.185 "partition_name": "SPDK_TEST_second" 00:08:30.185 } 00:08:30.185 } 00:08:30.185 } 00:08:30.185 ]' 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:30.185 21:59:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62241 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62241 ']' 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62241 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62241 00:08:30.185 killing process with pid 62241 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62241' 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62241 00:08:30.185 21:59:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62241 00:08:32.085 00:08:32.085 real 0m2.999s 00:08:32.085 user 0m3.156s 00:08:32.085 sys 0m0.365s 00:08:32.085 21:59:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.085 21:59:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:32.085 ************************************ 00:08:32.085 END TEST bdev_gpt_uuid 00:08:32.085 ************************************ 00:08:32.085 21:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:32.085 21:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:32.085 21:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:32.085 21:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:32.085 21:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:32.085 21:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:32.085 21:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:32.085 21:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:32.085 21:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:32.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:32.441 Waiting for block devices as requested 00:08:32.441 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.441 0000:00:10.0 (1b36 0010): 
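
The bdev_gpt_uuid checks traced above boil down to three assertions per partition: the RPC returns exactly one bdev, its first alias equals the partition UUID, and driver_specific.gpt.unique_partition_guid equals the same UUID (the backslash-heavy patterns are just bash-escaped literals of that UUID). A minimal standalone sketch of the same assertions, assuming a running SPDK target and the stock scripts/rpc.py helper, with the UUID taken from the trace above:

    uuid=abf1734f-66e5-4c0f-aa29-4021d4d307df
    json=$(scripts/rpc.py bdev_get_bdevs -b "$uuid")                  # look the bdev up by alias
    [[ $(jq -r 'length' <<< "$json") == 1 ]] || exit 1                # exactly one bdev returned
    [[ $(jq -r '.[0].aliases[0]' <<< "$json") == "$uuid" ]] || exit 1
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$json") == "$uuid" ]] || exit 1
    echo "GPT UUID checks passed for $uuid"
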
uio_pci_generic -> nvme 00:08:32.441 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.441 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:37.737 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:37.737 21:59:10 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:37.737 21:59:10 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:37.737 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:37.737 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:37.737 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:37.737 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:37.737 21:59:10 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:37.737 00:08:37.737 real 0m57.300s 00:08:37.737 user 1m12.863s 00:08:37.737 sys 0m8.241s 00:08:37.737 21:59:10 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.737 21:59:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.737 ************************************ 00:08:37.737 END TEST blockdev_nvme_gpt 00:08:37.737 ************************************ 00:08:37.737 21:59:10 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:37.737 21:59:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.737 21:59:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.737 21:59:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.997 ************************************ 00:08:37.997 START TEST nvme 00:08:37.997 ************************************ 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:37.997 * Looking for test storage... 00:08:37.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.997 21:59:10 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.997 21:59:10 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.997 21:59:10 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.997 21:59:10 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.997 21:59:10 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.997 21:59:10 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.997 21:59:10 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.997 21:59:10 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.997 21:59:10 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.997 21:59:10 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.997 21:59:10 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.997 21:59:10 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:37.997 21:59:10 nvme -- scripts/common.sh@345 -- # : 1 00:08:37.997 21:59:10 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.997 21:59:10 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
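
For reference, the wipefs lines above decode cleanly: 45 46 49 20 50 41 52 54 is ASCII for "EFI PART", the GPT header signature, erased from the primary header at offset 0x1000 (LBA 1 on this 4096-byte-sector disk) and from the backup header near the end of the device, while 55 aa at offset 0x1fe is the protective MBR's boot signature. A hedged spot-check that the signatures really are gone (assumes a root shell and xxd; offsets copied from the log):

    for off in 0x1000 0x13ffff000 0x1fe; do
      # each region should no longer show the "EFI PART" or 55 aa signature bytes
      dd if=/dev/nvme0n1 bs=1 skip=$((off)) count=8 2>/dev/null | xxd
    done
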
ver1_l : ver2_l) )) 00:08:37.997 21:59:10 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:37.997 21:59:10 nvme -- scripts/common.sh@353 -- # local d=1 00:08:37.997 21:59:10 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.997 21:59:10 nvme -- scripts/common.sh@355 -- # echo 1 00:08:37.997 21:59:10 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.997 21:59:10 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:37.997 21:59:10 nvme -- scripts/common.sh@353 -- # local d=2 00:08:37.997 21:59:10 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.997 21:59:10 nvme -- scripts/common.sh@355 -- # echo 2 00:08:37.997 21:59:10 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.997 21:59:10 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.997 21:59:10 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.997 21:59:10 nvme -- scripts/common.sh@368 -- # return 0 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.997 --rc genhtml_branch_coverage=1 00:08:37.997 --rc genhtml_function_coverage=1 00:08:37.997 --rc genhtml_legend=1 00:08:37.997 --rc geninfo_all_blocks=1 00:08:37.997 --rc geninfo_unexecuted_blocks=1 00:08:37.997 00:08:37.997 ' 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.997 --rc genhtml_branch_coverage=1 00:08:37.997 --rc genhtml_function_coverage=1 00:08:37.997 --rc genhtml_legend=1 00:08:37.997 --rc geninfo_all_blocks=1 00:08:37.997 --rc geninfo_unexecuted_blocks=1 00:08:37.997 00:08:37.997 ' 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.997 --rc genhtml_branch_coverage=1 00:08:37.997 --rc genhtml_function_coverage=1 00:08:37.997 --rc genhtml_legend=1 00:08:37.997 --rc geninfo_all_blocks=1 00:08:37.997 --rc geninfo_unexecuted_blocks=1 00:08:37.997 00:08:37.997 ' 00:08:37.997 21:59:10 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.997 --rc genhtml_branch_coverage=1 00:08:37.997 --rc genhtml_function_coverage=1 00:08:37.997 --rc genhtml_legend=1 00:08:37.997 --rc geninfo_all_blocks=1 00:08:37.997 --rc geninfo_unexecuted_blocks=1 00:08:37.997 00:08:37.997 ' 00:08:37.997 21:59:10 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:38.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:38.828 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.828 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.828 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.828 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:39.088 21:59:11 nvme -- nvme/nvme.sh@79 -- # uname 00:08:39.088 21:59:11 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:39.088 21:59:11 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:39.088 21:59:11 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:39.088 21:59:11 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:39.088 21:59:11 nvme -- 
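
The scripts/common.sh trace above is the coverage gate: it decides whether the installed lcov (1.15) predates 2.x by splitting both version strings on '.', '-' or ':' and comparing the fields numerically. The same logic condensed into one function (the name version_lt is mine, not the suite's):

    version_lt() {                        # returns 0 when $1 < $2
      local IFS=.-: i
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                            # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"
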
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:39.088 21:59:11 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:39.088 21:59:11 nvme -- common/autotest_common.sh@1075 -- # stubpid=62876 00:08:39.088 Waiting for stub to ready for secondary processes... 00:08:39.088 21:59:11 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:39.088 21:59:11 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:39.088 21:59:11 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62876 ]] 00:08:39.088 21:59:11 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:39.088 21:59:11 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:39.088 [2024-12-06 21:59:11.758091] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:08:39.088 [2024-12-06 21:59:11.758223] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:39.657 [2024-12-06 21:59:12.512986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.918 [2024-12-06 21:59:12.607232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.918 [2024-12-06 21:59:12.607412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.918 [2024-12-06 21:59:12.607485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.918 [2024-12-06 21:59:12.620867] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:39.918 [2024-12-06 21:59:12.620912] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:39.918 [2024-12-06 21:59:12.630975] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:39.918 [2024-12-06 21:59:12.631071] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:39.918 [2024-12-06 21:59:12.632693] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:39.918 [2024-12-06 21:59:12.632898] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:39.918 [2024-12-06 21:59:12.633004] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:39.918 [2024-12-06 21:59:12.634440] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:39.918 [2024-12-06 21:59:12.634556] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:39.918 [2024-12-06 21:59:12.634594] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:39.918 [2024-12-06 21:59:12.636880] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:39.918 [2024-12-06 21:59:12.637120] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:39.918 [2024-12-06 21:59:12.637226] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:39.918 [2024-12-06 21:59:12.637290] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:39.918 [2024-12-06 21:59:12.637341] nvme_cuse.c: 
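
The stub handshake traced above is a plain poll loop: after launching test/app/stub with the pinned cores and 4096 MB of hugepages, the harness sleeps in one-second steps until the stub creates /var/run/spdk_stub0, re-checking /proc/<pid> on every pass so a crashed stub fails the job quickly instead of hanging it. Paraphrased (the variable name is mine; the pid is the one echoed above):

    stub_pid=62876
    while [ ! -e /var/run/spdk_stub0 ]; do
      [[ -e /proc/$stub_pid ]] || { echo "stub $stub_pid exited before becoming ready" >&2; exit 1; }
      sleep 1
    done
    echo done.
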
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:39.918 done. 00:08:39.918 21:59:12 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:39.918 21:59:12 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:39.918 21:59:12 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:39.918 21:59:12 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:39.918 21:59:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.918 21:59:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.918 ************************************ 00:08:39.918 START TEST nvme_reset 00:08:39.918 ************************************ 00:08:39.918 21:59:12 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:40.179 Initializing NVMe Controllers 00:08:40.179 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:40.179 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:40.179 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:40.179 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:40.179 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:40.179 00:08:40.179 real 0m0.287s 00:08:40.179 user 0m0.146s 00:08:40.179 sys 0m0.097s 00:08:40.179 21:59:13 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.179 ************************************ 00:08:40.179 END TEST nvme_reset 00:08:40.179 ************************************ 00:08:40.179 21:59:13 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:40.439 21:59:13 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:40.439 21:59:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.439 21:59:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.439 21:59:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.439 ************************************ 00:08:40.439 START TEST nvme_identify 00:08:40.439 ************************************ 00:08:40.439 21:59:13 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:40.440 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:40.440 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:40.440 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:40.440 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:40.440 21:59:13 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:40.440 21:59:13 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:40.440 21:59:13 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:40.440 21:59:13 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:40.440 21:59:13 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:40.440 21:59:13 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:40.440 21:59:13 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:40.440 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:40.703 [2024-12-06 
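
nvme_identify above builds its device list by asking scripts/gen_nvme.sh for a JSON config and extracting every PCI address (traddr) with jq, then points spdk_nvme_identify at the result. The equivalent standalone query, using the repo path from this run:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"            # expect the four 0000:00:1x.0 addresses listed above
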
21:59:13.332424] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62897 terminated unexpected 00:08:40.703 ===================================================== 00:08:40.703 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:40.703 ===================================================== 00:08:40.703 Controller Capabilities/Features 00:08:40.703 ================================ 00:08:40.703 Vendor ID: 1b36 00:08:40.703 Subsystem Vendor ID: 1af4 00:08:40.703 Serial Number: 12340 00:08:40.703 Model Number: QEMU NVMe Ctrl 00:08:40.703 Firmware Version: 8.0.0 00:08:40.703 Recommended Arb Burst: 6 00:08:40.703 IEEE OUI Identifier: 00 54 52 00:08:40.703 Multi-path I/O 00:08:40.703 May have multiple subsystem ports: No 00:08:40.703 May have multiple controllers: No 00:08:40.703 Associated with SR-IOV VF: No 00:08:40.703 Max Data Transfer Size: 524288 00:08:40.703 Max Number of Namespaces: 256 00:08:40.703 Max Number of I/O Queues: 64 00:08:40.703 NVMe Specification Version (VS): 1.4 00:08:40.703 NVMe Specification Version (Identify): 1.4 00:08:40.703 Maximum Queue Entries: 2048 00:08:40.703 Contiguous Queues Required: Yes 00:08:40.703 Arbitration Mechanisms Supported 00:08:40.703 Weighted Round Robin: Not Supported 00:08:40.703 Vendor Specific: Not Supported 00:08:40.703 Reset Timeout: 7500 ms 00:08:40.703 Doorbell Stride: 4 bytes 00:08:40.703 NVM Subsystem Reset: Not Supported 00:08:40.703 Command Sets Supported 00:08:40.703 NVM Command Set: Supported 00:08:40.703 Boot Partition: Not Supported 00:08:40.703 Memory Page Size Minimum: 4096 bytes 00:08:40.703 Memory Page Size Maximum: 65536 bytes 00:08:40.703 Persistent Memory Region: Not Supported 00:08:40.703 Optional Asynchronous Events Supported 00:08:40.703 Namespace Attribute Notices: Supported 00:08:40.703 Firmware Activation Notices: Not Supported 00:08:40.703 ANA Change Notices: Not Supported 00:08:40.703 PLE Aggregate Log Change Notices: Not Supported 00:08:40.703 LBA Status Info Alert Notices: Not Supported 00:08:40.703 EGE Aggregate Log Change Notices: Not Supported 00:08:40.703 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.703 Zone Descriptor Change Notices: Not Supported 00:08:40.703 Discovery Log Change Notices: Not Supported 00:08:40.703 Controller Attributes 00:08:40.703 128-bit Host Identifier: Not Supported 00:08:40.703 Non-Operational Permissive Mode: Not Supported 00:08:40.703 NVM Sets: Not Supported 00:08:40.703 Read Recovery Levels: Not Supported 00:08:40.703 Endurance Groups: Not Supported 00:08:40.703 Predictable Latency Mode: Not Supported 00:08:40.703 Traffic Based Keep ALive: Not Supported 00:08:40.703 Namespace Granularity: Not Supported 00:08:40.703 SQ Associations: Not Supported 00:08:40.703 UUID List: Not Supported 00:08:40.703 Multi-Domain Subsystem: Not Supported 00:08:40.703 Fixed Capacity Management: Not Supported 00:08:40.703 Variable Capacity Management: Not Supported 00:08:40.703 Delete Endurance Group: Not Supported 00:08:40.703 Delete NVM Set: Not Supported 00:08:40.703 Extended LBA Formats Supported: Supported 00:08:40.703 Flexible Data Placement Supported: Not Supported 00:08:40.703 00:08:40.703 Controller Memory Buffer Support 00:08:40.703 ================================ 00:08:40.703 Supported: No 00:08:40.704 00:08:40.704 Persistent Memory Region Support 00:08:40.704 ================================ 00:08:40.704 Supported: No 00:08:40.704 00:08:40.704 Admin Command Set Attributes 00:08:40.704 ============================ 00:08:40.704 Security Send/Receive: 
Not Supported 00:08:40.704 Format NVM: Supported 00:08:40.704 Firmware Activate/Download: Not Supported 00:08:40.704 Namespace Management: Supported 00:08:40.704 Device Self-Test: Not Supported 00:08:40.704 Directives: Supported 00:08:40.704 NVMe-MI: Not Supported 00:08:40.704 Virtualization Management: Not Supported 00:08:40.704 Doorbell Buffer Config: Supported 00:08:40.704 Get LBA Status Capability: Not Supported 00:08:40.704 Command & Feature Lockdown Capability: Not Supported 00:08:40.704 Abort Command Limit: 4 00:08:40.704 Async Event Request Limit: 4 00:08:40.704 Number of Firmware Slots: N/A 00:08:40.704 Firmware Slot 1 Read-Only: N/A 00:08:40.704 Firmware Activation Without Reset: N/A 00:08:40.704 Multiple Update Detection Support: N/A 00:08:40.704 Firmware Update Granularity: No Information Provided 00:08:40.704 Per-Namespace SMART Log: Yes 00:08:40.704 Asymmetric Namespace Access Log Page: Not Supported 00:08:40.704 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:40.704 Command Effects Log Page: Supported 00:08:40.704 Get Log Page Extended Data: Supported 00:08:40.704 Telemetry Log Pages: Not Supported 00:08:40.704 Persistent Event Log Pages: Not Supported 00:08:40.704 Supported Log Pages Log Page: May Support 00:08:40.704 Commands Supported & Effects Log Page: Not Supported 00:08:40.704 Feature Identifiers & Effects Log Page:May Support 00:08:40.704 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.704 Data Area 4 for Telemetry Log: Not Supported 00:08:40.704 Error Log Page Entries Supported: 1 00:08:40.704 Keep Alive: Not Supported 00:08:40.704 00:08:40.704 NVM Command Set Attributes 00:08:40.704 ========================== 00:08:40.704 Submission Queue Entry Size 00:08:40.704 Max: 64 00:08:40.704 Min: 64 00:08:40.704 Completion Queue Entry Size 00:08:40.704 Max: 16 00:08:40.704 Min: 16 00:08:40.704 Number of Namespaces: 256 00:08:40.704 Compare Command: Supported 00:08:40.704 Write Uncorrectable Command: Not Supported 00:08:40.704 Dataset Management Command: Supported 00:08:40.704 Write Zeroes Command: Supported 00:08:40.704 Set Features Save Field: Supported 00:08:40.704 Reservations: Not Supported 00:08:40.704 Timestamp: Supported 00:08:40.704 Copy: Supported 00:08:40.704 Volatile Write Cache: Present 00:08:40.704 Atomic Write Unit (Normal): 1 00:08:40.704 Atomic Write Unit (PFail): 1 00:08:40.704 Atomic Compare & Write Unit: 1 00:08:40.704 Fused Compare & Write: Not Supported 00:08:40.704 Scatter-Gather List 00:08:40.704 SGL Command Set: Supported 00:08:40.704 SGL Keyed: Not Supported 00:08:40.704 SGL Bit Bucket Descriptor: Not Supported 00:08:40.704 SGL Metadata Pointer: Not Supported 00:08:40.704 Oversized SGL: Not Supported 00:08:40.704 SGL Metadata Address: Not Supported 00:08:40.704 SGL Offset: Not Supported 00:08:40.704 Transport SGL Data Block: Not Supported 00:08:40.704 Replay Protected Memory Block: Not Supported 00:08:40.704 00:08:40.704 Firmware Slot Information 00:08:40.704 ========================= 00:08:40.704 Active slot: 1 00:08:40.704 Slot 1 Firmware Revision: 1.0 00:08:40.704 00:08:40.704 00:08:40.704 Commands Supported and Effects 00:08:40.704 ============================== 00:08:40.704 Admin Commands 00:08:40.704 -------------- 00:08:40.704 Delete I/O Submission Queue (00h): Supported 00:08:40.704 Create I/O Submission Queue (01h): Supported 00:08:40.704 Get Log Page (02h): Supported 00:08:40.704 Delete I/O Completion Queue (04h): Supported 00:08:40.704 Create I/O Completion Queue (05h): Supported 00:08:40.704 Identify (06h): Supported 
00:08:40.704 Abort (08h): Supported 00:08:40.704 Set Features (09h): Supported 00:08:40.704 Get Features (0Ah): Supported 00:08:40.704 Asynchronous Event Request (0Ch): Supported 00:08:40.704 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.704 Directive Send (19h): Supported 00:08:40.704 Directive Receive (1Ah): Supported 00:08:40.704 Virtualization Management (1Ch): Supported 00:08:40.704 Doorbell Buffer Config (7Ch): Supported 00:08:40.704 Format NVM (80h): Supported LBA-Change 00:08:40.704 I/O Commands 00:08:40.704 ------------ 00:08:40.704 Flush (00h): Supported LBA-Change 00:08:40.704 Write (01h): Supported LBA-Change 00:08:40.704 Read (02h): Supported 00:08:40.704 Compare (05h): Supported 00:08:40.704 Write Zeroes (08h): Supported LBA-Change 00:08:40.704 Dataset Management (09h): Supported LBA-Change 00:08:40.704 Unknown (0Ch): Supported 00:08:40.704 Unknown (12h): Supported 00:08:40.704 Copy (19h): Supported LBA-Change 00:08:40.704 Unknown (1Dh): Supported LBA-Change 00:08:40.704 00:08:40.704 Error Log 00:08:40.704 ========= 00:08:40.704 00:08:40.704 Arbitration 00:08:40.704 =========== 00:08:40.704 Arbitration Burst: no limit 00:08:40.704 00:08:40.704 Power Management 00:08:40.704 ================ 00:08:40.704 Number of Power States: 1 00:08:40.704 Current Power State: Power State #0 00:08:40.704 Power State #0: 00:08:40.704 Max Power: 25.00 W 00:08:40.704 Non-Operational State: Operational 00:08:40.704 Entry Latency: 16 microseconds 00:08:40.704 Exit Latency: 4 microseconds 00:08:40.704 Relative Read Throughput: 0 00:08:40.704 Relative Read Latency: 0 00:08:40.704 Relative Write Throughput: 0 00:08:40.704 Relative Write Latency: 0 00:08:40.704 [2024-12-06 21:59:13.334128] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62897 terminated unexpected 00:08:40.704 Idle Power: Not Reported 00:08:40.704 Active Power: Not Reported 00:08:40.704 Non-Operational Permissive Mode: Not Supported 00:08:40.704 00:08:40.704 Health Information 00:08:40.704 ================== 00:08:40.704 Critical Warnings: 00:08:40.704 Available Spare Space: OK 00:08:40.704 Temperature: OK 00:08:40.704 Device Reliability: OK 00:08:40.704 Read Only: No 00:08:40.704 Volatile Memory Backup: OK 00:08:40.704 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.704 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.704 Available Spare: 0% 00:08:40.704 Available Spare Threshold: 0% 00:08:40.704 Life Percentage Used: 0% 00:08:40.704 Data Units Read: 640 00:08:40.704 Data Units Written: 568 00:08:40.704 Host Read Commands: 34541 00:08:40.704 Host Write Commands: 34327 00:08:40.704 Controller Busy Time: 0 minutes 00:08:40.704 Power Cycles: 0 00:08:40.704 Power On Hours: 0 hours 00:08:40.704 Unsafe Shutdowns: 0 00:08:40.704 Unrecoverable Media Errors: 0 00:08:40.704 Lifetime Error Log Entries: 0 00:08:40.704 Warning Temperature Time: 0 minutes 00:08:40.704 Critical Temperature Time: 0 minutes 00:08:40.704 00:08:40.704 Number of Queues 00:08:40.704 ================ 00:08:40.704 Number of I/O Submission Queues: 64 00:08:40.704 Number of I/O Completion Queues: 64 00:08:40.704 00:08:40.704 ZNS Specific Controller Data 00:08:40.704 ============================ 00:08:40.704 Zone Append Size Limit: 0 00:08:40.704 00:08:40.704 00:08:40.704 Active Namespaces 00:08:40.704 ================= 00:08:40.704 Namespace ID:1 00:08:40.704 Error Recovery Timeout: Unlimited 00:08:40.704 Command Set Identifier: NVM (00h) 00:08:40.704 Deallocate: Supported 00:08:40.704
Deallocated/Unwritten Error: Supported 00:08:40.704 Deallocated Read Value: All 0x00 00:08:40.704 Deallocate in Write Zeroes: Not Supported 00:08:40.704 Deallocated Guard Field: 0xFFFF 00:08:40.704 Flush: Supported 00:08:40.704 Reservation: Not Supported 00:08:40.704 Metadata Transferred as: Separate Metadata Buffer 00:08:40.704 Namespace Sharing Capabilities: Private 00:08:40.704 Size (in LBAs): 1548666 (5GiB) 00:08:40.704 Capacity (in LBAs): 1548666 (5GiB) 00:08:40.704 Utilization (in LBAs): 1548666 (5GiB) 00:08:40.704 Thin Provisioning: Not Supported 00:08:40.704 Per-NS Atomic Units: No 00:08:40.704 Maximum Single Source Range Length: 128 00:08:40.704 Maximum Copy Length: 128 00:08:40.704 Maximum Source Range Count: 128 00:08:40.704 NGUID/EUI64 Never Reused: No 00:08:40.704 Namespace Write Protected: No 00:08:40.704 Number of LBA Formats: 8 00:08:40.704 Current LBA Format: LBA Format #07 00:08:40.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.704 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.704 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.704 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.704 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.704 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.704 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.704 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.704 00:08:40.705 NVM Specific Namespace Data 00:08:40.705 =========================== 00:08:40.705 Logical Block Storage Tag Mask: 0 00:08:40.705 Protection Information Capabilities: 00:08:40.705 16b Guard Protection Information Storage Tag Support: No 00:08:40.705 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.705 Storage Tag Check Read Support: No 00:08:40.705 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.705 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.705 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.705 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.705 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.705 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.705 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.705 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.705 ===================================================== 00:08:40.705 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:40.705 ===================================================== 00:08:40.705 Controller Capabilities/Features 00:08:40.705 ================================ 00:08:40.705 Vendor ID: 1b36 00:08:40.705 Subsystem Vendor ID: 1af4 00:08:40.705 Serial Number: 12341 00:08:40.705 Model Number: QEMU NVMe Ctrl 00:08:40.705 Firmware Version: 8.0.0 00:08:40.705 Recommended Arb Burst: 6 00:08:40.705 IEEE OUI Identifier: 00 54 52 00:08:40.705 Multi-path I/O 00:08:40.705 May have multiple subsystem ports: No 00:08:40.705 May have multiple controllers: No 00:08:40.705 Associated with SR-IOV VF: No 00:08:40.705 Max Data Transfer Size: 524288 00:08:40.705 Max Number of Namespaces: 256 00:08:40.705 Max Number of I/O Queues: 64 00:08:40.705 NVMe Specification Version (VS): 1.4 00:08:40.705 NVMe 
Specification Version (Identify): 1.4 00:08:40.705 Maximum Queue Entries: 2048 00:08:40.705 Contiguous Queues Required: Yes 00:08:40.705 Arbitration Mechanisms Supported 00:08:40.705 Weighted Round Robin: Not Supported 00:08:40.705 Vendor Specific: Not Supported 00:08:40.705 Reset Timeout: 7500 ms 00:08:40.705 Doorbell Stride: 4 bytes 00:08:40.705 NVM Subsystem Reset: Not Supported 00:08:40.705 Command Sets Supported 00:08:40.705 NVM Command Set: Supported 00:08:40.705 Boot Partition: Not Supported 00:08:40.705 Memory Page Size Minimum: 4096 bytes 00:08:40.705 Memory Page Size Maximum: 65536 bytes 00:08:40.705 Persistent Memory Region: Not Supported 00:08:40.705 Optional Asynchronous Events Supported 00:08:40.705 Namespace Attribute Notices: Supported 00:08:40.705 Firmware Activation Notices: Not Supported 00:08:40.705 ANA Change Notices: Not Supported 00:08:40.705 PLE Aggregate Log Change Notices: Not Supported 00:08:40.705 LBA Status Info Alert Notices: Not Supported 00:08:40.705 EGE Aggregate Log Change Notices: Not Supported 00:08:40.705 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.705 Zone Descriptor Change Notices: Not Supported 00:08:40.705 Discovery Log Change Notices: Not Supported 00:08:40.705 Controller Attributes 00:08:40.705 128-bit Host Identifier: Not Supported 00:08:40.705 Non-Operational Permissive Mode: Not Supported 00:08:40.705 NVM Sets: Not Supported 00:08:40.705 Read Recovery Levels: Not Supported 00:08:40.705 Endurance Groups: Not Supported 00:08:40.705 Predictable Latency Mode: Not Supported 00:08:40.705 Traffic Based Keep ALive: Not Supported 00:08:40.705 Namespace Granularity: Not Supported 00:08:40.705 SQ Associations: Not Supported 00:08:40.705 UUID List: Not Supported 00:08:40.705 Multi-Domain Subsystem: Not Supported 00:08:40.705 Fixed Capacity Management: Not Supported 00:08:40.705 Variable Capacity Management: Not Supported 00:08:40.705 Delete Endurance Group: Not Supported 00:08:40.705 Delete NVM Set: Not Supported 00:08:40.705 Extended LBA Formats Supported: Supported 00:08:40.705 Flexible Data Placement Supported: Not Supported 00:08:40.705 00:08:40.705 Controller Memory Buffer Support 00:08:40.705 ================================ 00:08:40.705 Supported: No 00:08:40.705 00:08:40.705 Persistent Memory Region Support 00:08:40.705 ================================ 00:08:40.705 Supported: No 00:08:40.705 00:08:40.705 Admin Command Set Attributes 00:08:40.705 ============================ 00:08:40.705 Security Send/Receive: Not Supported 00:08:40.705 Format NVM: Supported 00:08:40.705 Firmware Activate/Download: Not Supported 00:08:40.705 Namespace Management: Supported 00:08:40.705 Device Self-Test: Not Supported 00:08:40.705 Directives: Supported 00:08:40.705 NVMe-MI: Not Supported 00:08:40.705 Virtualization Management: Not Supported 00:08:40.705 Doorbell Buffer Config: Supported 00:08:40.705 Get LBA Status Capability: Not Supported 00:08:40.705 Command & Feature Lockdown Capability: Not Supported 00:08:40.705 Abort Command Limit: 4 00:08:40.705 Async Event Request Limit: 4 00:08:40.705 Number of Firmware Slots: N/A 00:08:40.705 Firmware Slot 1 Read-Only: N/A 00:08:40.705 Firmware Activation Without Reset: N/A 00:08:40.705 Multiple Update Detection Support: N/A 00:08:40.705 Firmware Update Granularity: No Information Provided 00:08:40.705 Per-Namespace SMART Log: Yes 00:08:40.705 Asymmetric Namespace Access Log Page: Not Supported 00:08:40.705 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:40.705 Command Effects Log Page: Supported 
00:08:40.705 Get Log Page Extended Data: Supported 00:08:40.705 Telemetry Log Pages: Not Supported 00:08:40.705 Persistent Event Log Pages: Not Supported 00:08:40.705 Supported Log Pages Log Page: May Support 00:08:40.705 Commands Supported & Effects Log Page: Not Supported 00:08:40.705 Feature Identifiers & Effects Log Page:May Support 00:08:40.705 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.705 Data Area 4 for Telemetry Log: Not Supported 00:08:40.705 Error Log Page Entries Supported: 1 00:08:40.705 Keep Alive: Not Supported 00:08:40.705 00:08:40.705 NVM Command Set Attributes 00:08:40.705 ========================== 00:08:40.705 Submission Queue Entry Size 00:08:40.705 Max: 64 00:08:40.705 Min: 64 00:08:40.705 Completion Queue Entry Size 00:08:40.705 Max: 16 00:08:40.705 Min: 16 00:08:40.705 Number of Namespaces: 256 00:08:40.705 Compare Command: Supported 00:08:40.705 Write Uncorrectable Command: Not Supported 00:08:40.705 Dataset Management Command: Supported 00:08:40.705 Write Zeroes Command: Supported 00:08:40.705 Set Features Save Field: Supported 00:08:40.705 Reservations: Not Supported 00:08:40.705 Timestamp: Supported 00:08:40.705 Copy: Supported 00:08:40.705 Volatile Write Cache: Present 00:08:40.705 Atomic Write Unit (Normal): 1 00:08:40.705 Atomic Write Unit (PFail): 1 00:08:40.705 Atomic Compare & Write Unit: 1 00:08:40.705 Fused Compare & Write: Not Supported 00:08:40.705 Scatter-Gather List 00:08:40.705 SGL Command Set: Supported 00:08:40.705 SGL Keyed: Not Supported 00:08:40.705 SGL Bit Bucket Descriptor: Not Supported 00:08:40.705 SGL Metadata Pointer: Not Supported 00:08:40.705 Oversized SGL: Not Supported 00:08:40.705 SGL Metadata Address: Not Supported 00:08:40.705 SGL Offset: Not Supported 00:08:40.705 Transport SGL Data Block: Not Supported 00:08:40.705 Replay Protected Memory Block: Not Supported 00:08:40.705 00:08:40.705 Firmware Slot Information 00:08:40.705 ========================= 00:08:40.705 Active slot: 1 00:08:40.705 Slot 1 Firmware Revision: 1.0 00:08:40.705 00:08:40.705 00:08:40.705 Commands Supported and Effects 00:08:40.705 ============================== 00:08:40.705 Admin Commands 00:08:40.705 -------------- 00:08:40.705 Delete I/O Submission Queue (00h): Supported 00:08:40.705 Create I/O Submission Queue (01h): Supported 00:08:40.705 Get Log Page (02h): Supported 00:08:40.705 Delete I/O Completion Queue (04h): Supported 00:08:40.705 Create I/O Completion Queue (05h): Supported 00:08:40.705 Identify (06h): Supported 00:08:40.705 Abort (08h): Supported 00:08:40.705 Set Features (09h): Supported 00:08:40.705 Get Features (0Ah): Supported 00:08:40.705 Asynchronous Event Request (0Ch): Supported 00:08:40.705 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.705 Directive Send (19h): Supported 00:08:40.705 Directive Receive (1Ah): Supported 00:08:40.705 Virtualization Management (1Ch): Supported 00:08:40.705 Doorbell Buffer Config (7Ch): Supported 00:08:40.705 Format NVM (80h): Supported LBA-Change 00:08:40.705 I/O Commands 00:08:40.705 ------------ 00:08:40.705 Flush (00h): Supported LBA-Change 00:08:40.705 Write (01h): Supported LBA-Change 00:08:40.705 Read (02h): Supported 00:08:40.705 Compare (05h): Supported 00:08:40.705 Write Zeroes (08h): Supported LBA-Change 00:08:40.705 Dataset Management (09h): Supported LBA-Change 00:08:40.705 Unknown (0Ch): Supported 00:08:40.705 Unknown (12h): Supported 00:08:40.705 Copy (19h): Supported LBA-Change 00:08:40.706 Unknown (1Dh): Supported LBA-Change 00:08:40.706 00:08:40.706 Error 
Log 00:08:40.706 ========= 00:08:40.706 00:08:40.706 Arbitration 00:08:40.706 =========== 00:08:40.706 Arbitration Burst: no limit 00:08:40.706 00:08:40.706 Power Management 00:08:40.706 ================ 00:08:40.706 Number of Power States: 1 00:08:40.706 Current Power State: Power State #0 00:08:40.706 Power State #0: 00:08:40.706 Max Power: 25.00 W 00:08:40.706 Non-Operational State: Operational 00:08:40.706 Entry Latency: 16 microseconds 00:08:40.706 Exit Latency: 4 microseconds 00:08:40.706 Relative Read Throughput: 0 00:08:40.706 Relative Read Latency: 0 00:08:40.706 Relative Write Throughput: 0 00:08:40.706 Relative Write Latency: 0 00:08:40.706 Idle Power: Not Reported 00:08:40.706 Active Power: Not Reported 00:08:40.706 Non-Operational Permissive Mode: Not Supported 00:08:40.706 00:08:40.706 Health Information 00:08:40.706 ================== 00:08:40.706 Critical Warnings: 00:08:40.706 Available Spare Space: OK 00:08:40.706 [2024-12-06 21:59:13.335218] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62897 terminated unexpected 00:08:40.706 Temperature: OK 00:08:40.706 Device Reliability: OK 00:08:40.706 Read Only: No 00:08:40.706 Volatile Memory Backup: OK 00:08:40.706 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.706 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.706 Available Spare: 0% 00:08:40.706 Available Spare Threshold: 0% 00:08:40.706 Life Percentage Used: 0% 00:08:40.706 Data Units Read: 1004 00:08:40.706 Data Units Written: 871 00:08:40.706 Host Read Commands: 51819 00:08:40.706 Host Write Commands: 50611 00:08:40.706 Controller Busy Time: 0 minutes 00:08:40.706 Power Cycles: 0 00:08:40.706 Power On Hours: 0 hours 00:08:40.706 Unsafe Shutdowns: 0 00:08:40.706 Unrecoverable Media Errors: 0 00:08:40.706 Lifetime Error Log Entries: 0 00:08:40.706 Warning Temperature Time: 0 minutes 00:08:40.706 Critical Temperature Time: 0 minutes 00:08:40.706 00:08:40.706 Number of Queues 00:08:40.706 ================ 00:08:40.706 Number of I/O Submission Queues: 64 00:08:40.706 Number of I/O Completion Queues: 64 00:08:40.706 00:08:40.706 ZNS Specific Controller Data 00:08:40.706 ============================ 00:08:40.706 Zone Append Size Limit: 0 00:08:40.706 00:08:40.706 00:08:40.706 Active Namespaces 00:08:40.706 ================= 00:08:40.706 Namespace ID:1 00:08:40.706 Error Recovery Timeout: Unlimited 00:08:40.706 Command Set Identifier: NVM (00h) 00:08:40.706 Deallocate: Supported 00:08:40.706 Deallocated/Unwritten Error: Supported 00:08:40.706 Deallocated Read Value: All 0x00 00:08:40.706 Deallocate in Write Zeroes: Not Supported 00:08:40.706 Deallocated Guard Field: 0xFFFF 00:08:40.706 Flush: Supported 00:08:40.706 Reservation: Not Supported 00:08:40.706 Namespace Sharing Capabilities: Private 00:08:40.706 Size (in LBAs): 1310720 (5GiB) 00:08:40.706 Capacity (in LBAs): 1310720 (5GiB) 00:08:40.706 Utilization (in LBAs): 1310720 (5GiB) 00:08:40.706 Thin Provisioning: Not Supported 00:08:40.706 Per-NS Atomic Units: No 00:08:40.706 Maximum Single Source Range Length: 128 00:08:40.706 Maximum Copy Length: 128 00:08:40.706 Maximum Source Range Count: 128 00:08:40.706 NGUID/EUI64 Never Reused: No 00:08:40.706 Namespace Write Protected: No 00:08:40.706 Number of LBA Formats: 8 00:08:40.706 Current LBA Format: LBA Format #04 00:08:40.706 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.706 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.706 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.706 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:08:40.706 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.706 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.706 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.706 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.706 00:08:40.706 NVM Specific Namespace Data 00:08:40.706 =========================== 00:08:40.706 Logical Block Storage Tag Mask: 0 00:08:40.706 Protection Information Capabilities: 00:08:40.706 16b Guard Protection Information Storage Tag Support: No 00:08:40.706 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.706 Storage Tag Check Read Support: No 00:08:40.706 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.706 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.706 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.706 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.706 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.706 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.706 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.706 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.706 ===================================================== 00:08:40.706 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:40.706 ===================================================== 00:08:40.706 Controller Capabilities/Features 00:08:40.706 ================================ 00:08:40.706 Vendor ID: 1b36 00:08:40.706 Subsystem Vendor ID: 1af4 00:08:40.706 Serial Number: 12343 00:08:40.706 Model Number: QEMU NVMe Ctrl 00:08:40.706 Firmware Version: 8.0.0 00:08:40.706 Recommended Arb Burst: 6 00:08:40.706 IEEE OUI Identifier: 00 54 52 00:08:40.706 Multi-path I/O 00:08:40.706 May have multiple subsystem ports: No 00:08:40.706 May have multiple controllers: Yes 00:08:40.706 Associated with SR-IOV VF: No 00:08:40.706 Max Data Transfer Size: 524288 00:08:40.706 Max Number of Namespaces: 256 00:08:40.706 Max Number of I/O Queues: 64 00:08:40.706 NVMe Specification Version (VS): 1.4 00:08:40.706 NVMe Specification Version (Identify): 1.4 00:08:40.706 Maximum Queue Entries: 2048 00:08:40.706 Contiguous Queues Required: Yes 00:08:40.706 Arbitration Mechanisms Supported 00:08:40.706 Weighted Round Robin: Not Supported 00:08:40.706 Vendor Specific: Not Supported 00:08:40.706 Reset Timeout: 7500 ms 00:08:40.706 Doorbell Stride: 4 bytes 00:08:40.706 NVM Subsystem Reset: Not Supported 00:08:40.706 Command Sets Supported 00:08:40.706 NVM Command Set: Supported 00:08:40.706 Boot Partition: Not Supported 00:08:40.706 Memory Page Size Minimum: 4096 bytes 00:08:40.706 Memory Page Size Maximum: 65536 bytes 00:08:40.706 Persistent Memory Region: Not Supported 00:08:40.706 Optional Asynchronous Events Supported 00:08:40.706 Namespace Attribute Notices: Supported 00:08:40.706 Firmware Activation Notices: Not Supported 00:08:40.706 ANA Change Notices: Not Supported 00:08:40.706 PLE Aggregate Log Change Notices: Not Supported 00:08:40.706 LBA Status Info Alert Notices: Not Supported 00:08:40.706 EGE Aggregate Log Change Notices: Not Supported 00:08:40.706 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.706 Zone 
Descriptor Change Notices: Not Supported 00:08:40.706 Discovery Log Change Notices: Not Supported 00:08:40.706 Controller Attributes 00:08:40.706 128-bit Host Identifier: Not Supported 00:08:40.706 Non-Operational Permissive Mode: Not Supported 00:08:40.706 NVM Sets: Not Supported 00:08:40.706 Read Recovery Levels: Not Supported 00:08:40.706 Endurance Groups: Supported 00:08:40.706 Predictable Latency Mode: Not Supported 00:08:40.706 Traffic Based Keep ALive: Not Supported 00:08:40.706 Namespace Granularity: Not Supported 00:08:40.706 SQ Associations: Not Supported 00:08:40.706 UUID List: Not Supported 00:08:40.706 Multi-Domain Subsystem: Not Supported 00:08:40.706 Fixed Capacity Management: Not Supported 00:08:40.706 Variable Capacity Management: Not Supported 00:08:40.706 Delete Endurance Group: Not Supported 00:08:40.706 Delete NVM Set: Not Supported 00:08:40.706 Extended LBA Formats Supported: Supported 00:08:40.706 Flexible Data Placement Supported: Supported 00:08:40.706 00:08:40.706 Controller Memory Buffer Support 00:08:40.706 ================================ 00:08:40.706 Supported: No 00:08:40.706 00:08:40.706 Persistent Memory Region Support 00:08:40.706 ================================ 00:08:40.706 Supported: No 00:08:40.706 00:08:40.706 Admin Command Set Attributes 00:08:40.706 ============================ 00:08:40.706 Security Send/Receive: Not Supported 00:08:40.706 Format NVM: Supported 00:08:40.706 Firmware Activate/Download: Not Supported 00:08:40.706 Namespace Management: Supported 00:08:40.706 Device Self-Test: Not Supported 00:08:40.706 Directives: Supported 00:08:40.706 NVMe-MI: Not Supported 00:08:40.706 Virtualization Management: Not Supported 00:08:40.706 Doorbell Buffer Config: Supported 00:08:40.706 Get LBA Status Capability: Not Supported 00:08:40.706 Command & Feature Lockdown Capability: Not Supported 00:08:40.706 Abort Command Limit: 4 00:08:40.706 Async Event Request Limit: 4 00:08:40.707 Number of Firmware Slots: N/A 00:08:40.707 Firmware Slot 1 Read-Only: N/A 00:08:40.707 Firmware Activation Without Reset: N/A 00:08:40.707 Multiple Update Detection Support: N/A 00:08:40.707 Firmware Update Granularity: No Information Provided 00:08:40.707 Per-Namespace SMART Log: Yes 00:08:40.707 Asymmetric Namespace Access Log Page: Not Supported 00:08:40.707 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:40.707 Command Effects Log Page: Supported 00:08:40.707 Get Log Page Extended Data: Supported 00:08:40.707 Telemetry Log Pages: Not Supported 00:08:40.707 Persistent Event Log Pages: Not Supported 00:08:40.707 Supported Log Pages Log Page: May Support 00:08:40.707 Commands Supported & Effects Log Page: Not Supported 00:08:40.707 Feature Identifiers & Effects Log Page:May Support 00:08:40.707 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.707 Data Area 4 for Telemetry Log: Not Supported 00:08:40.707 Error Log Page Entries Supported: 1 00:08:40.707 Keep Alive: Not Supported 00:08:40.707 00:08:40.707 NVM Command Set Attributes 00:08:40.707 ========================== 00:08:40.707 Submission Queue Entry Size 00:08:40.707 Max: 64 00:08:40.707 Min: 64 00:08:40.707 Completion Queue Entry Size 00:08:40.707 Max: 16 00:08:40.707 Min: 16 00:08:40.707 Number of Namespaces: 256 00:08:40.707 Compare Command: Supported 00:08:40.707 Write Uncorrectable Command: Not Supported 00:08:40.707 Dataset Management Command: Supported 00:08:40.707 Write Zeroes Command: Supported 00:08:40.707 Set Features Save Field: Supported 00:08:40.707 Reservations: Not Supported 00:08:40.707 
Timestamp: Supported 00:08:40.707 Copy: Supported 00:08:40.707 Volatile Write Cache: Present 00:08:40.707 Atomic Write Unit (Normal): 1 00:08:40.707 Atomic Write Unit (PFail): 1 00:08:40.707 Atomic Compare & Write Unit: 1 00:08:40.707 Fused Compare & Write: Not Supported 00:08:40.707 Scatter-Gather List 00:08:40.707 SGL Command Set: Supported 00:08:40.707 SGL Keyed: Not Supported 00:08:40.707 SGL Bit Bucket Descriptor: Not Supported 00:08:40.707 SGL Metadata Pointer: Not Supported 00:08:40.707 Oversized SGL: Not Supported 00:08:40.707 SGL Metadata Address: Not Supported 00:08:40.707 SGL Offset: Not Supported 00:08:40.707 Transport SGL Data Block: Not Supported 00:08:40.707 Replay Protected Memory Block: Not Supported 00:08:40.707 00:08:40.707 Firmware Slot Information 00:08:40.707 ========================= 00:08:40.707 Active slot: 1 00:08:40.707 Slot 1 Firmware Revision: 1.0 00:08:40.707 00:08:40.707 00:08:40.707 Commands Supported and Effects 00:08:40.707 ============================== 00:08:40.707 Admin Commands 00:08:40.707 -------------- 00:08:40.707 Delete I/O Submission Queue (00h): Supported 00:08:40.707 Create I/O Submission Queue (01h): Supported 00:08:40.707 Get Log Page (02h): Supported 00:08:40.707 Delete I/O Completion Queue (04h): Supported 00:08:40.707 Create I/O Completion Queue (05h): Supported 00:08:40.707 Identify (06h): Supported 00:08:40.707 Abort (08h): Supported 00:08:40.707 Set Features (09h): Supported 00:08:40.707 Get Features (0Ah): Supported 00:08:40.707 Asynchronous Event Request (0Ch): Supported 00:08:40.707 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.707 Directive Send (19h): Supported 00:08:40.707 Directive Receive (1Ah): Supported 00:08:40.707 Virtualization Management (1Ch): Supported 00:08:40.707 Doorbell Buffer Config (7Ch): Supported 00:08:40.707 Format NVM (80h): Supported LBA-Change 00:08:40.707 I/O Commands 00:08:40.707 ------------ 00:08:40.707 Flush (00h): Supported LBA-Change 00:08:40.707 Write (01h): Supported LBA-Change 00:08:40.707 Read (02h): Supported 00:08:40.707 Compare (05h): Supported 00:08:40.707 Write Zeroes (08h): Supported LBA-Change 00:08:40.707 Dataset Management (09h): Supported LBA-Change 00:08:40.707 Unknown (0Ch): Supported 00:08:40.707 Unknown (12h): Supported 00:08:40.707 Copy (19h): Supported LBA-Change 00:08:40.707 Unknown (1Dh): Supported LBA-Change 00:08:40.707 00:08:40.707 Error Log 00:08:40.707 ========= 00:08:40.707 00:08:40.707 Arbitration 00:08:40.707 =========== 00:08:40.707 Arbitration Burst: no limit 00:08:40.707 00:08:40.707 Power Management 00:08:40.707 ================ 00:08:40.707 Number of Power States: 1 00:08:40.707 Current Power State: Power State #0 00:08:40.707 Power State #0: 00:08:40.707 Max Power: 25.00 W 00:08:40.707 Non-Operational State: Operational 00:08:40.707 Entry Latency: 16 microseconds 00:08:40.707 Exit Latency: 4 microseconds 00:08:40.707 Relative Read Throughput: 0 00:08:40.707 Relative Read Latency: 0 00:08:40.707 Relative Write Throughput: 0 00:08:40.707 Relative Write Latency: 0 00:08:40.707 Idle Power: Not Reported 00:08:40.707 Active Power: Not Reported 00:08:40.707 Non-Operational Permissive Mode: Not Supported 00:08:40.707 00:08:40.707 Health Information 00:08:40.707 ================== 00:08:40.707 Critical Warnings: 00:08:40.707 Available Spare Space: OK 00:08:40.707 Temperature: OK 00:08:40.707 Device Reliability: OK 00:08:40.707 Read Only: No 00:08:40.707 Volatile Memory Backup: OK 00:08:40.707 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.707 
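
The temperature pairs printed throughout these dumps come from the NVMe composite temperature field, which the spec defines in kelvins; the parenthesised Celsius figures are just the integer offset C = K - 273:

    echo $(( 323 - 273 ))   # 50, the current temperature in Celsius
    echo $(( 343 - 273 ))   # 70, the warning threshold
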
Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.707 Available Spare: 0% 00:08:40.707 Available Spare Threshold: 0% 00:08:40.707 Life Percentage Used: 0% 00:08:40.707 Data Units Read: 762 00:08:40.707 Data Units Written: 691 00:08:40.707 Host Read Commands: 35793 00:08:40.707 Host Write Commands: 35216 00:08:40.707 Controller Busy Time: 0 minutes 00:08:40.707 Power Cycles: 0 00:08:40.707 Power On Hours: 0 hours 00:08:40.707 Unsafe Shutdowns: 0 00:08:40.707 Unrecoverable Media Errors: 0 00:08:40.707 Lifetime Error Log Entries: 0 00:08:40.707 Warning Temperature Time: 0 minutes 00:08:40.707 Critical Temperature Time: 0 minutes 00:08:40.707 00:08:40.707 Number of Queues 00:08:40.707 ================ 00:08:40.707 Number of I/O Submission Queues: 64 00:08:40.707 Number of I/O Completion Queues: 64 00:08:40.707 00:08:40.707 ZNS Specific Controller Data 00:08:40.707 ============================ 00:08:40.707 Zone Append Size Limit: 0 00:08:40.707 00:08:40.707 00:08:40.707 Active Namespaces 00:08:40.707 ================= 00:08:40.707 Namespace ID:1 00:08:40.707 Error Recovery Timeout: Unlimited 00:08:40.707 Command Set Identifier: NVM (00h) 00:08:40.707 Deallocate: Supported 00:08:40.707 Deallocated/Unwritten Error: Supported 00:08:40.707 Deallocated Read Value: All 0x00 00:08:40.707 Deallocate in Write Zeroes: Not Supported 00:08:40.707 Deallocated Guard Field: 0xFFFF 00:08:40.707 Flush: Supported 00:08:40.707 Reservation: Not Supported 00:08:40.707 Namespace Sharing Capabilities: Multiple Controllers 00:08:40.707 Size (in LBAs): 262144 (1GiB) 00:08:40.707 Capacity (in LBAs): 262144 (1GiB) 00:08:40.707 Utilization (in LBAs): 262144 (1GiB) 00:08:40.707 Thin Provisioning: Not Supported 00:08:40.707 Per-NS Atomic Units: No 00:08:40.707 Maximum Single Source Range Length: 128 00:08:40.707 Maximum Copy Length: 128 00:08:40.707 Maximum Source Range Count: 128 00:08:40.707 NGUID/EUI64 Never Reused: No 00:08:40.707 Namespace Write Protected: No 00:08:40.707 Endurance group ID: 1 00:08:40.707 Number of LBA Formats: 8 00:08:40.707 Current LBA Format: LBA Format #04 00:08:40.707 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.707 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.707 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.707 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.707 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.707 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.707 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.707 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.707 00:08:40.707 Get Feature FDP: 00:08:40.707 ================ 00:08:40.707 Enabled: Yes 00:08:40.707 FDP configuration index: 0 00:08:40.707 00:08:40.707 FDP configurations log page 00:08:40.707 =========================== 00:08:40.707 Number of FDP configurations: 1 00:08:40.707 Version: 0 00:08:40.707 Size: 112 00:08:40.707 FDP Configuration Descriptor: 0 00:08:40.707 Descriptor Size: 96 00:08:40.707 Reclaim Group Identifier format: 2 00:08:40.707 FDP Volatile Write Cache: Not Present 00:08:40.707 FDP Configuration: Valid 00:08:40.707 Vendor Specific Size: 0 00:08:40.707 Number of Reclaim Groups: 2 00:08:40.707 Number of Reclaim Unit Handles: 8 00:08:40.707 Max Placement Identifiers: 128 00:08:40.707 Number of Namespaces Supported: 256 00:08:40.707 Reclaim Unit Nominal Size: 6000000 bytes 00:08:40.707 Estimated Reclaim Unit Time Limit: Not Reported 00:08:40.707 RUH Desc #000: RUH Type: Initially Isolated 00:08:40.707 RUH Desc #001: RUH
Type: Initially Isolated 00:08:40.708 RUH Desc #002: RUH Type: Initially Isolated 00:08:40.708 RUH Desc #003: RUH Type: Initially Isolated 00:08:40.708 RUH Desc #004: RUH Type: Initially Isolated 00:08:40.708 RUH Desc #005: RUH Type: Initially Isolated 00:08:40.708 RUH Desc #006: RUH Type: Initially Isolated 00:08:40.708 RUH Desc #007: RUH Type: Initially Isolated 00:08:40.708 00:08:40.708 FDP reclaim unit handle usage log page 00:08:40.708 ====================================== 00:08:40.708 Number of Reclaim Unit Handles: 8 00:08:40.708 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:40.708 RUH Usage Desc #001: RUH Attributes: Unused 00:08:40.708 RUH Usage Desc #002: RUH Attributes: Unused 00:08:40.708 RUH Usage Desc #003: RUH Attributes: Unused 00:08:40.708 RUH Usage Desc #004: RUH Attributes: Unused 00:08:40.708 RUH Usage Desc #005: RUH Attributes: Unused 00:08:40.708 RUH Usage Desc #006: RUH Attributes: Unused 00:08:40.708 RUH Usage Desc #007: RUH Attributes: Unused 00:08:40.708 00:08:40.708 FDP statistics log page 00:08:40.708 ======================= 00:08:40.708 Host bytes with metadata written: 404856832 00:08:40.708 [2024-12-06 21:59:13.337029] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62897 terminated unexpected 00:08:40.708 Media bytes with metadata written: 406540288 00:08:40.708 Media bytes erased: 0 00:08:40.708 00:08:40.708 FDP events log page 00:08:40.708 =================== 00:08:40.708 Number of FDP events: 0 00:08:40.708 00:08:40.708 NVM Specific Namespace Data 00:08:40.708 =========================== 00:08:40.708 Logical Block Storage Tag Mask: 0 00:08:40.708 Protection Information Capabilities: 00:08:40.708 16b Guard Protection Information Storage Tag Support: No 00:08:40.708 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.708 Storage Tag Check Read Support: No 00:08:40.708 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.708 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.708 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.708 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.708 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.708 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.708 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.708 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.708 ===================================================== 00:08:40.708 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:40.708 ===================================================== 00:08:40.708 Controller Capabilities/Features 00:08:40.708 ================================ 00:08:40.708 Vendor ID: 1b36 00:08:40.708 Subsystem Vendor ID: 1af4 00:08:40.708 Serial Number: 12342 00:08:40.708 Model Number: QEMU NVMe Ctrl 00:08:40.708 Firmware Version: 8.0.0 00:08:40.708 Recommended Arb Burst: 6 00:08:40.708 IEEE OUI Identifier: 00 54 52 00:08:40.708 Multi-path I/O 00:08:40.708 May have multiple subsystem ports: No 00:08:40.708 May have multiple controllers: No 00:08:40.708 Associated with SR-IOV VF: No 00:08:40.708 Max Data Transfer Size: 524288 00:08:40.708 Max Number of Namespaces: 256
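
The FDP statistics in the 12343 dump above also allow a quick write-amplification estimate, taking media bytes written over host bytes written; on this freshly created QEMU device the ratio is effectively 1:

    awk 'BEGIN { printf "WAF = %.4f\n", 406540288 / 404856832 }'   # -> WAF = 1.0042
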
00:08:40.708 Max Number of I/O Queues: 64 00:08:40.708 NVMe Specification Version (VS): 1.4 00:08:40.708 NVMe Specification Version (Identify): 1.4 00:08:40.708 Maximum Queue Entries: 2048 00:08:40.708 Contiguous Queues Required: Yes 00:08:40.708 Arbitration Mechanisms Supported 00:08:40.708 Weighted Round Robin: Not Supported 00:08:40.708 Vendor Specific: Not Supported 00:08:40.708 Reset Timeout: 7500 ms 00:08:40.708 Doorbell Stride: 4 bytes 00:08:40.708 NVM Subsystem Reset: Not Supported 00:08:40.708 Command Sets Supported 00:08:40.708 NVM Command Set: Supported 00:08:40.708 Boot Partition: Not Supported 00:08:40.708 Memory Page Size Minimum: 4096 bytes 00:08:40.708 Memory Page Size Maximum: 65536 bytes 00:08:40.708 Persistent Memory Region: Not Supported 00:08:40.708 Optional Asynchronous Events Supported 00:08:40.708 Namespace Attribute Notices: Supported 00:08:40.708 Firmware Activation Notices: Not Supported 00:08:40.708 ANA Change Notices: Not Supported 00:08:40.708 PLE Aggregate Log Change Notices: Not Supported 00:08:40.708 LBA Status Info Alert Notices: Not Supported 00:08:40.708 EGE Aggregate Log Change Notices: Not Supported 00:08:40.708 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.708 Zone Descriptor Change Notices: Not Supported 00:08:40.708 Discovery Log Change Notices: Not Supported 00:08:40.708 Controller Attributes 00:08:40.708 128-bit Host Identifier: Not Supported 00:08:40.708 Non-Operational Permissive Mode: Not Supported 00:08:40.708 NVM Sets: Not Supported 00:08:40.708 Read Recovery Levels: Not Supported 00:08:40.708 Endurance Groups: Not Supported 00:08:40.708 Predictable Latency Mode: Not Supported 00:08:40.708 Traffic Based Keep Alive: Not Supported 00:08:40.708 Namespace Granularity: Not Supported 00:08:40.708 SQ Associations: Not Supported 00:08:40.708 UUID List: Not Supported 00:08:40.708 Multi-Domain Subsystem: Not Supported 00:08:40.708 Fixed Capacity Management: Not Supported 00:08:40.708 Variable Capacity Management: Not Supported 00:08:40.708 Delete Endurance Group: Not Supported 00:08:40.708 Delete NVM Set: Not Supported 00:08:40.708 Extended LBA Formats Supported: Supported 00:08:40.708 Flexible Data Placement Supported: Not Supported 00:08:40.708 00:08:40.708 Controller Memory Buffer Support 00:08:40.708 ================================ 00:08:40.708 Supported: No 00:08:40.708 00:08:40.708 Persistent Memory Region Support 00:08:40.708 ================================ 00:08:40.708 Supported: No 00:08:40.708 00:08:40.708 Admin Command Set Attributes 00:08:40.708 ============================ 00:08:40.708 Security Send/Receive: Not Supported 00:08:40.708 Format NVM: Supported 00:08:40.708 Firmware Activate/Download: Not Supported 00:08:40.708 Namespace Management: Supported 00:08:40.708 Device Self-Test: Not Supported 00:08:40.708 Directives: Supported 00:08:40.708 NVMe-MI: Not Supported 00:08:40.708 Virtualization Management: Not Supported 00:08:40.708 Doorbell Buffer Config: Supported 00:08:40.708 Get LBA Status Capability: Not Supported 00:08:40.708 Command & Feature Lockdown Capability: Not Supported 00:08:40.708 Abort Command Limit: 4 00:08:40.708 Async Event Request Limit: 4 00:08:40.708 Number of Firmware Slots: N/A 00:08:40.708 Firmware Slot 1 Read-Only: N/A 00:08:40.708 Firmware Activation Without Reset: N/A 00:08:40.708 Multiple Update Detection Support: N/A 00:08:40.708 Firmware Update Granularity: No Information Provided 00:08:40.708 Per-Namespace SMART Log: Yes 00:08:40.708 Asymmetric Namespace Access Log Page: Not Supported
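Nearly every optional attribute in the block above reports Not Supported, which is expected for the QEMU emulated controller. One quick way to summarize such a dump is with grep; a hedged sketch, where identify-12342.log is an assumed capture of this output rather than a file the job actually writes:
    # Tally unsupported vs. supported entries in a saved identify dump.
    grep -o 'Not Supported' identify-12342.log | wc -l
    grep -o ': Supported' identify-12342.log | wc -l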
00:08:40.708 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:40.708 Command Effects Log Page: Supported 00:08:40.708 Get Log Page Extended Data: Supported 00:08:40.708 Telemetry Log Pages: Not Supported 00:08:40.708 Persistent Event Log Pages: Not Supported 00:08:40.708 Supported Log Pages Log Page: May Support 00:08:40.708 Commands Supported & Effects Log Page: Not Supported 00:08:40.708 Feature Identifiers & Effects Log Page: May Support 00:08:40.708 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.708 Data Area 4 for Telemetry Log: Not Supported 00:08:40.708 Error Log Page Entries Supported: 1 00:08:40.708 Keep Alive: Not Supported 00:08:40.708 00:08:40.708 NVM Command Set Attributes 00:08:40.709 ========================== 00:08:40.709 Submission Queue Entry Size 00:08:40.709 Max: 64 00:08:40.709 Min: 64 00:08:40.709 Completion Queue Entry Size 00:08:40.709 Max: 16 00:08:40.709 Min: 16 00:08:40.709 Number of Namespaces: 256 00:08:40.709 Compare Command: Supported 00:08:40.709 Write Uncorrectable Command: Not Supported 00:08:40.709 Dataset Management Command: Supported 00:08:40.709 Write Zeroes Command: Supported 00:08:40.709 Set Features Save Field: Supported 00:08:40.709 Reservations: Not Supported 00:08:40.709 Timestamp: Supported 00:08:40.709 Copy: Supported 00:08:40.709 Volatile Write Cache: Present 00:08:40.709 Atomic Write Unit (Normal): 1 00:08:40.709 Atomic Write Unit (PFail): 1 00:08:40.709 Atomic Compare & Write Unit: 1 00:08:40.709 Fused Compare & Write: Not Supported 00:08:40.709 Scatter-Gather List 00:08:40.709 SGL Command Set: Supported 00:08:40.709 SGL Keyed: Not Supported 00:08:40.709 SGL Bit Bucket Descriptor: Not Supported 00:08:40.709 SGL Metadata Pointer: Not Supported 00:08:40.709 Oversized SGL: Not Supported 00:08:40.709 SGL Metadata Address: Not Supported 00:08:40.709 SGL Offset: Not Supported 00:08:40.709 Transport SGL Data Block: Not Supported 00:08:40.709 Replay Protected Memory Block: Not Supported 00:08:40.709 00:08:40.709 Firmware Slot Information 00:08:40.709 ========================= 00:08:40.709 Active slot: 1 00:08:40.709 Slot 1 Firmware Revision: 1.0 00:08:40.709 00:08:40.709 00:08:40.709 Commands Supported and Effects 00:08:40.709 ============================== 00:08:40.709 Admin Commands 00:08:40.709 -------------- 00:08:40.709 Delete I/O Submission Queue (00h): Supported 00:08:40.709 Create I/O Submission Queue (01h): Supported 00:08:40.709 Get Log Page (02h): Supported 00:08:40.709 Delete I/O Completion Queue (04h): Supported 00:08:40.709 Create I/O Completion Queue (05h): Supported 00:08:40.709 Identify (06h): Supported 00:08:40.709 Abort (08h): Supported 00:08:40.709 Set Features (09h): Supported 00:08:40.709 Get Features (0Ah): Supported 00:08:40.709 Asynchronous Event Request (0Ch): Supported 00:08:40.709 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.709 Directive Send (19h): Supported 00:08:40.709 Directive Receive (1Ah): Supported 00:08:40.709 Virtualization Management (1Ch): Supported 00:08:40.709 Doorbell Buffer Config (7Ch): Supported 00:08:40.709 Format NVM (80h): Supported LBA-Change 00:08:40.709 I/O Commands 00:08:40.709 ------------ 00:08:40.709 Flush (00h): Supported LBA-Change 00:08:40.709 Write (01h): Supported LBA-Change 00:08:40.709 Read (02h): Supported 00:08:40.709 Compare (05h): Supported 00:08:40.709 Write Zeroes (08h): Supported LBA-Change 00:08:40.709 Dataset Management (09h): Supported LBA-Change 00:08:40.709 Unknown (0Ch): Supported 00:08:40.709 Unknown (12h): Supported 00:08:40.709 Copy (19h):
Supported LBA-Change 00:08:40.709 Unknown (1Dh): Supported LBA-Change 00:08:40.709 00:08:40.709 Error Log 00:08:40.709 ========= 00:08:40.709 00:08:40.709 Arbitration 00:08:40.709 =========== 00:08:40.709 Arbitration Burst: no limit 00:08:40.709 00:08:40.709 Power Management 00:08:40.709 ================ 00:08:40.709 Number of Power States: 1 00:08:40.709 Current Power State: Power State #0 00:08:40.709 Power State #0: 00:08:40.709 Max Power: 25.00 W 00:08:40.709 Non-Operational State: Operational 00:08:40.709 Entry Latency: 16 microseconds 00:08:40.709 Exit Latency: 4 microseconds 00:08:40.709 Relative Read Throughput: 0 00:08:40.709 Relative Read Latency: 0 00:08:40.709 Relative Write Throughput: 0 00:08:40.709 Relative Write Latency: 0 00:08:40.709 Idle Power: Not Reported 00:08:40.709 Active Power: Not Reported 00:08:40.709 Non-Operational Permissive Mode: Not Supported 00:08:40.709 00:08:40.709 Health Information 00:08:40.709 ================== 00:08:40.709 Critical Warnings: 00:08:40.709 Available Spare Space: OK 00:08:40.709 Temperature: OK 00:08:40.709 Device Reliability: OK 00:08:40.709 Read Only: No 00:08:40.709 Volatile Memory Backup: OK 00:08:40.709 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.709 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.709 Available Spare: 0% 00:08:40.709 Available Spare Threshold: 0% 00:08:40.709 Life Percentage Used: 0% 00:08:40.709 Data Units Read: 2086 00:08:40.709 Data Units Written: 1873 00:08:40.709 Host Read Commands: 105639 00:08:40.709 Host Write Commands: 103908 00:08:40.709 Controller Busy Time: 0 minutes 00:08:40.709 Power Cycles: 0 00:08:40.709 Power On Hours: 0 hours 00:08:40.709 Unsafe Shutdowns: 0 00:08:40.709 Unrecoverable Media Errors: 0 00:08:40.709 Lifetime Error Log Entries: 0 00:08:40.709 Warning Temperature Time: 0 minutes 00:08:40.709 Critical Temperature Time: 0 minutes 00:08:40.709 00:08:40.709 Number of Queues 00:08:40.709 ================ 00:08:40.709 Number of I/O Submission Queues: 64 00:08:40.709 Number of I/O Completion Queues: 64 00:08:40.709 00:08:40.709 ZNS Specific Controller Data 00:08:40.709 ============================ 00:08:40.709 Zone Append Size Limit: 0 00:08:40.709 00:08:40.709 00:08:40.709 Active Namespaces 00:08:40.709 ================= 00:08:40.709 Namespace ID:1 00:08:40.709 Error Recovery Timeout: Unlimited 00:08:40.709 Command Set Identifier: NVM (00h) 00:08:40.709 Deallocate: Supported 00:08:40.709 Deallocated/Unwritten Error: Supported 00:08:40.709 Deallocated Read Value: All 0x00 00:08:40.709 Deallocate in Write Zeroes: Not Supported 00:08:40.709 Deallocated Guard Field: 0xFFFF 00:08:40.709 Flush: Supported 00:08:40.709 Reservation: Not Supported 00:08:40.709 Namespace Sharing Capabilities: Private 00:08:40.709 Size (in LBAs): 1048576 (4GiB) 00:08:40.709 Capacity (in LBAs): 1048576 (4GiB) 00:08:40.709 Utilization (in LBAs): 1048576 (4GiB) 00:08:40.709 Thin Provisioning: Not Supported 00:08:40.709 Per-NS Atomic Units: No 00:08:40.709 Maximum Single Source Range Length: 128 00:08:40.709 Maximum Copy Length: 128 00:08:40.709 Maximum Source Range Count: 128 00:08:40.709 NGUID/EUI64 Never Reused: No 00:08:40.709 Namespace Write Protected: No 00:08:40.709 Number of LBA Formats: 8 00:08:40.709 Current LBA Format: LBA Format #04 00:08:40.709 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.709 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.709 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.709 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.709 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.709 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.709 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.709 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.709 00:08:40.709 NVM Specific Namespace Data 00:08:40.709 =========================== 00:08:40.709 Logical Block Storage Tag Mask: 0 00:08:40.709 Protection Information Capabilities: 00:08:40.709 16b Guard Protection Information Storage Tag Support: No 00:08:40.709 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.709 Storage Tag Check Read Support: No 00:08:40.709 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.709 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.709 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.709 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.709 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.709 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.709 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.709 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.709 Namespace ID:2 00:08:40.709 Error Recovery Timeout: Unlimited 00:08:40.709 Command Set Identifier: NVM (00h) 00:08:40.709 Deallocate: Supported 00:08:40.709 Deallocated/Unwritten Error: Supported 00:08:40.709 Deallocated Read Value: All 0x00 00:08:40.709 Deallocate in Write Zeroes: Not Supported 00:08:40.709 Deallocated Guard Field: 0xFFFF 00:08:40.709 Flush: Supported 00:08:40.709 Reservation: Not Supported 00:08:40.709 Namespace Sharing Capabilities: Private 00:08:40.709 Size (in LBAs): 1048576 (4GiB) 00:08:40.709 Capacity (in LBAs): 1048576 (4GiB) 00:08:40.709 Utilization (in LBAs): 1048576 (4GiB) 00:08:40.709 Thin Provisioning: Not Supported 00:08:40.709 Per-NS Atomic Units: No 00:08:40.709 Maximum Single Source Range Length: 128 00:08:40.709 Maximum Copy Length: 128 00:08:40.709 Maximum Source Range Count: 128 00:08:40.709 NGUID/EUI64 Never Reused: No 00:08:40.709 Namespace Write Protected: No 00:08:40.709 Number of LBA Formats: 8 00:08:40.709 Current LBA Format: LBA Format #04 00:08:40.709 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.710 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.710 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.710 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.710 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.710 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.710 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.710 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.710 00:08:40.710 NVM Specific Namespace Data 00:08:40.710 =========================== 00:08:40.710 Logical Block Storage Tag Mask: 0 00:08:40.710 Protection Information Capabilities: 00:08:40.710 16b Guard Protection Information Storage Tag Support: No 00:08:40.710 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.710 Storage Tag Check Read Support: No 00:08:40.710 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Namespace ID:3 00:08:40.710 Error Recovery Timeout: Unlimited 00:08:40.710 Command Set Identifier: NVM (00h) 00:08:40.710 Deallocate: Supported 00:08:40.710 Deallocated/Unwritten Error: Supported 00:08:40.710 Deallocated Read Value: All 0x00 00:08:40.710 Deallocate in Write Zeroes: Not Supported 00:08:40.710 Deallocated Guard Field: 0xFFFF 00:08:40.710 Flush: Supported 00:08:40.710 Reservation: Not Supported 00:08:40.710 Namespace Sharing Capabilities: Private 00:08:40.710 Size (in LBAs): 1048576 (4GiB) 00:08:40.710 Capacity (in LBAs): 1048576 (4GiB) 00:08:40.710 Utilization (in LBAs): 1048576 (4GiB) 00:08:40.710 Thin Provisioning: Not Supported 00:08:40.710 Per-NS Atomic Units: No 00:08:40.710 Maximum Single Source Range Length: 128 00:08:40.710 Maximum Copy Length: 128 00:08:40.710 Maximum Source Range Count: 128 00:08:40.710 NGUID/EUI64 Never Reused: No 00:08:40.710 Namespace Write Protected: No 00:08:40.710 Number of LBA Formats: 8 00:08:40.710 Current LBA Format: LBA Format #04 00:08:40.710 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.710 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.710 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.710 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.710 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.710 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.710 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.710 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.710 00:08:40.710 NVM Specific Namespace Data 00:08:40.710 =========================== 00:08:40.710 Logical Block Storage Tag Mask: 0 00:08:40.710 Protection Information Capabilities: 00:08:40.710 16b Guard Protection Information Storage Tag Support: No 00:08:40.710 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.710 Storage Tag Check Read Support: No 00:08:40.710 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.710 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:40.710 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:40.971 ===================================================== 00:08:40.971 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:40.971 ===================================================== 00:08:40.971 Controller Capabilities/Features 00:08:40.971 ================================ 00:08:40.971 Vendor ID: 1b36 00:08:40.971 Subsystem Vendor ID: 1af4 00:08:40.971 Serial Number: 12340 00:08:40.971 Model Number: QEMU NVMe Ctrl 00:08:40.971 Firmware Version: 8.0.0 00:08:40.971 Recommended Arb Burst: 6 00:08:40.971 IEEE OUI Identifier: 00 54 52 00:08:40.971 Multi-path I/O 00:08:40.971 May have multiple subsystem ports: No 00:08:40.971 May have multiple controllers: No 00:08:40.971 Associated with SR-IOV VF: No 00:08:40.971 Max Data Transfer Size: 524288 00:08:40.971 Max Number of Namespaces: 256 00:08:40.971 Max Number of I/O Queues: 64 00:08:40.971 NVMe Specification Version (VS): 1.4 00:08:40.971 NVMe Specification Version (Identify): 1.4 00:08:40.971 Maximum Queue Entries: 2048 00:08:40.971 Contiguous Queues Required: Yes 00:08:40.971 Arbitration Mechanisms Supported 00:08:40.971 Weighted Round Robin: Not Supported 00:08:40.971 Vendor Specific: Not Supported 00:08:40.971 Reset Timeout: 7500 ms 00:08:40.971 Doorbell Stride: 4 bytes 00:08:40.971 NVM Subsystem Reset: Not Supported 00:08:40.971 Command Sets Supported 00:08:40.971 NVM Command Set: Supported 00:08:40.971 Boot Partition: Not Supported 00:08:40.971 Memory Page Size Minimum: 4096 bytes 00:08:40.971 Memory Page Size Maximum: 65536 bytes 00:08:40.971 Persistent Memory Region: Not Supported 00:08:40.971 Optional Asynchronous Events Supported 00:08:40.971 Namespace Attribute Notices: Supported 00:08:40.972 Firmware Activation Notices: Not Supported 00:08:40.972 ANA Change Notices: Not Supported 00:08:40.972 PLE Aggregate Log Change Notices: Not Supported 00:08:40.972 LBA Status Info Alert Notices: Not Supported 00:08:40.972 EGE Aggregate Log Change Notices: Not Supported 00:08:40.972 Normal NVM Subsystem Shutdown event: Not Supported 00:08:40.972 Zone Descriptor Change Notices: Not Supported 00:08:40.972 Discovery Log Change Notices: Not Supported 00:08:40.972 Controller Attributes 00:08:40.972 128-bit Host Identifier: Not Supported 00:08:40.972 Non-Operational Permissive Mode: Not Supported 00:08:40.972 NVM Sets: Not Supported 00:08:40.972 Read Recovery Levels: Not Supported 00:08:40.972 Endurance Groups: Not Supported 00:08:40.972 Predictable Latency Mode: Not Supported 00:08:40.972 Traffic Based Keep Alive: Not Supported 00:08:40.972 Namespace Granularity: Not Supported 00:08:40.972 SQ Associations: Not Supported 00:08:40.972 UUID List: Not Supported 00:08:40.972 Multi-Domain Subsystem: Not Supported 00:08:40.972 Fixed Capacity Management: Not Supported 00:08:40.972 Variable Capacity Management: Not Supported 00:08:40.972 Delete Endurance Group: Not Supported 00:08:40.972 Delete NVM Set: Not Supported 00:08:40.972 Extended LBA Formats Supported: Supported 00:08:40.972 Flexible Data Placement Supported: Not Supported 00:08:40.972 00:08:40.972 Controller Memory Buffer Support 00:08:40.972 ================================ 00:08:40.972 Supported: No 00:08:40.972 00:08:40.972 Persistent Memory Region Support 00:08:40.972 ================================ 00:08:40.972 Supported: No 00:08:40.972 00:08:40.972 Admin Command Set Attributes 00:08:40.972 ============================ 00:08:40.972 Security Send/Receive: Not Supported 00:08:40.972
Format NVM: Supported 00:08:40.972 Firmware Activate/Download: Not Supported 00:08:40.972 Namespace Management: Supported 00:08:40.972 Device Self-Test: Not Supported 00:08:40.972 Directives: Supported 00:08:40.972 NVMe-MI: Not Supported 00:08:40.972 Virtualization Management: Not Supported 00:08:40.972 Doorbell Buffer Config: Supported 00:08:40.972 Get LBA Status Capability: Not Supported 00:08:40.972 Command & Feature Lockdown Capability: Not Supported 00:08:40.972 Abort Command Limit: 4 00:08:40.972 Async Event Request Limit: 4 00:08:40.972 Number of Firmware Slots: N/A 00:08:40.972 Firmware Slot 1 Read-Only: N/A 00:08:40.972 Firmware Activation Without Reset: N/A 00:08:40.972 Multiple Update Detection Support: N/A 00:08:40.972 Firmware Update Granularity: No Information Provided 00:08:40.972 Per-Namespace SMART Log: Yes 00:08:40.972 Asymmetric Namespace Access Log Page: Not Supported 00:08:40.972 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:40.972 Command Effects Log Page: Supported 00:08:40.972 Get Log Page Extended Data: Supported 00:08:40.972 Telemetry Log Pages: Not Supported 00:08:40.972 Persistent Event Log Pages: Not Supported 00:08:40.972 Supported Log Pages Log Page: May Support 00:08:40.972 Commands Supported & Effects Log Page: Not Supported 00:08:40.972 Feature Identifiers & Effects Log Page: May Support 00:08:40.972 NVMe-MI Commands & Effects Log Page: May Support 00:08:40.972 Data Area 4 for Telemetry Log: Not Supported 00:08:40.972 Error Log Page Entries Supported: 1 00:08:40.972 Keep Alive: Not Supported 00:08:40.972 00:08:40.972 NVM Command Set Attributes 00:08:40.972 ========================== 00:08:40.972 Submission Queue Entry Size 00:08:40.972 Max: 64 00:08:40.972 Min: 64 00:08:40.972 Completion Queue Entry Size 00:08:40.972 Max: 16 00:08:40.972 Min: 16 00:08:40.972 Number of Namespaces: 256 00:08:40.972 Compare Command: Supported 00:08:40.972 Write Uncorrectable Command: Not Supported 00:08:40.972 Dataset Management Command: Supported 00:08:40.972 Write Zeroes Command: Supported 00:08:40.972 Set Features Save Field: Supported 00:08:40.972 Reservations: Not Supported 00:08:40.972 Timestamp: Supported 00:08:40.972 Copy: Supported 00:08:40.972 Volatile Write Cache: Present 00:08:40.972 Atomic Write Unit (Normal): 1 00:08:40.972 Atomic Write Unit (PFail): 1 00:08:40.972 Atomic Compare & Write Unit: 1 00:08:40.972 Fused Compare & Write: Not Supported 00:08:40.972 Scatter-Gather List 00:08:40.972 SGL Command Set: Supported 00:08:40.972 SGL Keyed: Not Supported 00:08:40.972 SGL Bit Bucket Descriptor: Not Supported 00:08:40.972 SGL Metadata Pointer: Not Supported 00:08:40.972 Oversized SGL: Not Supported 00:08:40.972 SGL Metadata Address: Not Supported 00:08:40.972 SGL Offset: Not Supported 00:08:40.972 Transport SGL Data Block: Not Supported 00:08:40.972 Replay Protected Memory Block: Not Supported 00:08:40.972 00:08:40.972 Firmware Slot Information 00:08:40.972 ========================= 00:08:40.972 Active slot: 1 00:08:40.972 Slot 1 Firmware Revision: 1.0 00:08:40.972 00:08:40.972 00:08:40.972 Commands Supported and Effects 00:08:40.972 ============================== 00:08:40.972 Admin Commands 00:08:40.972 -------------- 00:08:40.972 Delete I/O Submission Queue (00h): Supported 00:08:40.972 Create I/O Submission Queue (01h): Supported 00:08:40.972 Get Log Page (02h): Supported 00:08:40.972 Delete I/O Completion Queue (04h): Supported 00:08:40.972 Create I/O Completion Queue (05h): Supported 00:08:40.972 Identify (06h): Supported 00:08:40.972 Abort (08h): Supported
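The queue attributes above fix the per-queue memory footprint of this controller. A worked check in bash arithmetic, using only values from the dump (2048 maximum queue entries, 64-byte submission queue entries, 16-byte completion queue entries):
    # A maximally sized queue pair needs 2048*64 = 131072 B (128 KiB) of SQ
    # memory and 2048*16 = 32768 B (32 KiB) of CQ memory.
    echo "SQ bytes: $((2048 * 64))"
    echo "CQ bytes: $((2048 * 16))"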
00:08:40.972 Set Features (09h): Supported 00:08:40.972 Get Features (0Ah): Supported 00:08:40.972 Asynchronous Event Request (0Ch): Supported 00:08:40.972 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:40.972 Directive Send (19h): Supported 00:08:40.972 Directive Receive (1Ah): Supported 00:08:40.972 Virtualization Management (1Ch): Supported 00:08:40.972 Doorbell Buffer Config (7Ch): Supported 00:08:40.972 Format NVM (80h): Supported LBA-Change 00:08:40.972 I/O Commands 00:08:40.972 ------------ 00:08:40.972 Flush (00h): Supported LBA-Change 00:08:40.972 Write (01h): Supported LBA-Change 00:08:40.972 Read (02h): Supported 00:08:40.972 Compare (05h): Supported 00:08:40.972 Write Zeroes (08h): Supported LBA-Change 00:08:40.972 Dataset Management (09h): Supported LBA-Change 00:08:40.972 Unknown (0Ch): Supported 00:08:40.972 Unknown (12h): Supported 00:08:40.972 Copy (19h): Supported LBA-Change 00:08:40.972 Unknown (1Dh): Supported LBA-Change 00:08:40.972 00:08:40.972 Error Log 00:08:40.972 ========= 00:08:40.972 00:08:40.972 Arbitration 00:08:40.972 =========== 00:08:40.972 Arbitration Burst: no limit 00:08:40.972 00:08:40.972 Power Management 00:08:40.972 ================ 00:08:40.972 Number of Power States: 1 00:08:40.972 Current Power State: Power State #0 00:08:40.972 Power State #0: 00:08:40.972 Max Power: 25.00 W 00:08:40.972 Non-Operational State: Operational 00:08:40.972 Entry Latency: 16 microseconds 00:08:40.972 Exit Latency: 4 microseconds 00:08:40.973 Relative Read Throughput: 0 00:08:40.973 Relative Read Latency: 0 00:08:40.973 Relative Write Throughput: 0 00:08:40.973 Relative Write Latency: 0 00:08:40.973 Idle Power: Not Reported 00:08:40.973 Active Power: Not Reported 00:08:40.973 Non-Operational Permissive Mode: Not Supported 00:08:40.973 00:08:40.973 Health Information 00:08:40.973 ================== 00:08:40.973 Critical Warnings: 00:08:40.973 Available Spare Space: OK 00:08:40.973 Temperature: OK 00:08:40.973 Device Reliability: OK 00:08:40.973 Read Only: No 00:08:40.973 Volatile Memory Backup: OK 00:08:40.973 Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.973 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:40.973 Available Spare: 0% 00:08:40.973 Available Spare Threshold: 0% 00:08:40.973 Life Percentage Used: 0% 00:08:40.973 Data Units Read: 640 00:08:40.973 Data Units Written: 568 00:08:40.973 Host Read Commands: 34541 00:08:40.973 Host Write Commands: 34327 00:08:40.973 Controller Busy Time: 0 minutes 00:08:40.973 Power Cycles: 0 00:08:40.973 Power On Hours: 0 hours 00:08:40.973 Unsafe Shutdowns: 0 00:08:40.973 Unrecoverable Media Errors: 0 00:08:40.973 Lifetime Error Log Entries: 0 00:08:40.973 Warning Temperature Time: 0 minutes 00:08:40.973 Critical Temperature Time: 0 minutes 00:08:40.973 00:08:40.973 Number of Queues 00:08:40.973 ================ 00:08:40.973 Number of I/O Submission Queues: 64 00:08:40.973 Number of I/O Completion Queues: 64 00:08:40.973 00:08:40.973 ZNS Specific Controller Data 00:08:40.973 ============================ 00:08:40.973 Zone Append Size Limit: 0 00:08:40.973 00:08:40.973 00:08:40.973 Active Namespaces 00:08:40.973 ================= 00:08:40.973 Namespace ID:1 00:08:40.973 Error Recovery Timeout: Unlimited 00:08:40.973 Command Set Identifier: NVM (00h) 00:08:40.973 Deallocate: Supported 00:08:40.973 Deallocated/Unwritten Error: Supported 00:08:40.973 Deallocated Read Value: All 0x00 00:08:40.973 Deallocate in Write Zeroes: Not Supported 00:08:40.973 Deallocated Guard Field: 0xFFFF 00:08:40.973 Flush: 
Supported 00:08:40.973 Reservation: Not Supported 00:08:40.973 Metadata Transferred as: Separate Metadata Buffer 00:08:40.973 Namespace Sharing Capabilities: Private 00:08:40.973 Size (in LBAs): 1548666 (5GiB) 00:08:40.973 Capacity (in LBAs): 1548666 (5GiB) 00:08:40.973 Utilization (in LBAs): 1548666 (5GiB) 00:08:40.973 Thin Provisioning: Not Supported 00:08:40.973 Per-NS Atomic Units: No 00:08:40.973 Maximum Single Source Range Length: 128 00:08:40.973 Maximum Copy Length: 128 00:08:40.973 Maximum Source Range Count: 128 00:08:40.973 NGUID/EUI64 Never Reused: No 00:08:40.973 Namespace Write Protected: No 00:08:40.973 Number of LBA Formats: 8 00:08:40.973 Current LBA Format: LBA Format #07 00:08:40.973 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:40.973 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:40.973 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:40.973 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:40.973 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:40.973 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:40.973 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:40.973 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:40.973 00:08:40.973 NVM Specific Namespace Data 00:08:40.973 =========================== 00:08:40.973 Logical Block Storage Tag Mask: 0 00:08:40.973 Protection Information Capabilities: 00:08:40.973 16b Guard Protection Information Storage Tag Support: No 00:08:40.973 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:40.973 Storage Tag Check Read Support: No 00:08:40.973 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.973 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.973 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.973 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.973 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.973 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.973 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.973 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:40.973 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:40.973 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:41.235 ===================================================== 00:08:41.235 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.235 ===================================================== 00:08:41.235 Controller Capabilities/Features 00:08:41.235 ================================ 00:08:41.235 Vendor ID: 1b36 00:08:41.235 Subsystem Vendor ID: 1af4 00:08:41.235 Serial Number: 12341 00:08:41.235 Model Number: QEMU NVMe Ctrl 00:08:41.235 Firmware Version: 8.0.0 00:08:41.235 Recommended Arb Burst: 6 00:08:41.235 IEEE OUI Identifier: 00 54 52 00:08:41.235 Multi-path I/O 00:08:41.235 May have multiple subsystem ports: No 00:08:41.235 May have multiple controllers: No 00:08:41.235 Associated with SR-IOV VF: No 00:08:41.235 Max Data Transfer Size: 524288 00:08:41.235 Max Number of Namespaces: 256 00:08:41.235 Max Number of I/O Queues: 64 00:08:41.235 NVMe 
Specification Version (VS): 1.4 00:08:41.235 NVMe Specification Version (Identify): 1.4 00:08:41.235 Maximum Queue Entries: 2048 00:08:41.235 Contiguous Queues Required: Yes 00:08:41.235 Arbitration Mechanisms Supported 00:08:41.235 Weighted Round Robin: Not Supported 00:08:41.235 Vendor Specific: Not Supported 00:08:41.235 Reset Timeout: 7500 ms 00:08:41.235 Doorbell Stride: 4 bytes 00:08:41.235 NVM Subsystem Reset: Not Supported 00:08:41.235 Command Sets Supported 00:08:41.235 NVM Command Set: Supported 00:08:41.235 Boot Partition: Not Supported 00:08:41.235 Memory Page Size Minimum: 4096 bytes 00:08:41.235 Memory Page Size Maximum: 65536 bytes 00:08:41.235 Persistent Memory Region: Not Supported 00:08:41.235 Optional Asynchronous Events Supported 00:08:41.235 Namespace Attribute Notices: Supported 00:08:41.235 Firmware Activation Notices: Not Supported 00:08:41.235 ANA Change Notices: Not Supported 00:08:41.235 PLE Aggregate Log Change Notices: Not Supported 00:08:41.235 LBA Status Info Alert Notices: Not Supported 00:08:41.235 EGE Aggregate Log Change Notices: Not Supported 00:08:41.235 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.235 Zone Descriptor Change Notices: Not Supported 00:08:41.235 Discovery Log Change Notices: Not Supported 00:08:41.235 Controller Attributes 00:08:41.235 128-bit Host Identifier: Not Supported 00:08:41.235 Non-Operational Permissive Mode: Not Supported 00:08:41.235 NVM Sets: Not Supported 00:08:41.235 Read Recovery Levels: Not Supported 00:08:41.235 Endurance Groups: Not Supported 00:08:41.235 Predictable Latency Mode: Not Supported 00:08:41.235 Traffic Based Keep Alive: Not Supported 00:08:41.235 Namespace Granularity: Not Supported 00:08:41.235 SQ Associations: Not Supported 00:08:41.235 UUID List: Not Supported 00:08:41.235 Multi-Domain Subsystem: Not Supported 00:08:41.235 Fixed Capacity Management: Not Supported 00:08:41.235 Variable Capacity Management: Not Supported 00:08:41.235 Delete Endurance Group: Not Supported 00:08:41.235 Delete NVM Set: Not Supported 00:08:41.235 Extended LBA Formats Supported: Supported 00:08:41.235 Flexible Data Placement Supported: Not Supported 00:08:41.235 00:08:41.235 Controller Memory Buffer Support 00:08:41.235 ================================ 00:08:41.235 Supported: No 00:08:41.235 00:08:41.235 Persistent Memory Region Support 00:08:41.235 ================================ 00:08:41.235 Supported: No 00:08:41.235 00:08:41.235 Admin Command Set Attributes 00:08:41.235 ============================ 00:08:41.235 Security Send/Receive: Not Supported 00:08:41.235 Format NVM: Supported 00:08:41.235 Firmware Activate/Download: Not Supported 00:08:41.235 Namespace Management: Supported 00:08:41.235 Device Self-Test: Not Supported 00:08:41.235 Directives: Supported 00:08:41.235 NVMe-MI: Not Supported 00:08:41.235 Virtualization Management: Not Supported 00:08:41.235 Doorbell Buffer Config: Supported 00:08:41.235 Get LBA Status Capability: Not Supported 00:08:41.235 Command & Feature Lockdown Capability: Not Supported 00:08:41.235 Abort Command Limit: 4 00:08:41.235 Async Event Request Limit: 4 00:08:41.235 Number of Firmware Slots: N/A 00:08:41.235 Firmware Slot 1 Read-Only: N/A 00:08:41.235 Firmware Activation Without Reset: N/A 00:08:41.235 Multiple Update Detection Support: N/A 00:08:41.235 Firmware Update Granularity: No Information Provided 00:08:41.235 Per-Namespace SMART Log: Yes 00:08:41.235 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.235 Subsystem NQN: nqn.2019-08.org.qemu:12341
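Each controller dump in this log comes from the same identify binary, invoked once per PCIe BDF by the for-bdf loop in nvme/nvme.sh shown above. A standalone sketch of that loop, with the BDF list copied from the controllers probed in this job:
    # Re-run spdk_nvme_identify against each emulated PCIe controller.
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
    done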
00:08:41.235 Command Effects Log Page: Supported 00:08:41.235 Get Log Page Extended Data: Supported 00:08:41.235 Telemetry Log Pages: Not Supported 00:08:41.235 Persistent Event Log Pages: Not Supported 00:08:41.235 Supported Log Pages Log Page: May Support 00:08:41.235 Commands Supported & Effects Log Page: Not Supported 00:08:41.235 Feature Identifiers & Effects Log Page: May Support 00:08:41.235 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.235 Data Area 4 for Telemetry Log: Not Supported 00:08:41.235 Error Log Page Entries Supported: 1 00:08:41.235 Keep Alive: Not Supported 00:08:41.235 00:08:41.235 NVM Command Set Attributes 00:08:41.235 ========================== 00:08:41.235 Submission Queue Entry Size 00:08:41.235 Max: 64 00:08:41.235 Min: 64 00:08:41.235 Completion Queue Entry Size 00:08:41.235 Max: 16 00:08:41.235 Min: 16 00:08:41.235 Number of Namespaces: 256 00:08:41.235 Compare Command: Supported 00:08:41.235 Write Uncorrectable Command: Not Supported 00:08:41.235 Dataset Management Command: Supported 00:08:41.235 Write Zeroes Command: Supported 00:08:41.235 Set Features Save Field: Supported 00:08:41.235 Reservations: Not Supported 00:08:41.235 Timestamp: Supported 00:08:41.235 Copy: Supported 00:08:41.235 Volatile Write Cache: Present 00:08:41.235 Atomic Write Unit (Normal): 1 00:08:41.235 Atomic Write Unit (PFail): 1 00:08:41.235 Atomic Compare & Write Unit: 1 00:08:41.235 Fused Compare & Write: Not Supported 00:08:41.235 Scatter-Gather List 00:08:41.235 SGL Command Set: Supported 00:08:41.235 SGL Keyed: Not Supported 00:08:41.235 SGL Bit Bucket Descriptor: Not Supported 00:08:41.235 SGL Metadata Pointer: Not Supported 00:08:41.235 Oversized SGL: Not Supported 00:08:41.235 SGL Metadata Address: Not Supported 00:08:41.235 SGL Offset: Not Supported 00:08:41.235 Transport SGL Data Block: Not Supported 00:08:41.235 Replay Protected Memory Block: Not Supported 00:08:41.235 00:08:41.235 Firmware Slot Information 00:08:41.236 ========================= 00:08:41.236 Active slot: 1 00:08:41.236 Slot 1 Firmware Revision: 1.0 00:08:41.236 00:08:41.236 00:08:41.236 Commands Supported and Effects 00:08:41.236 ============================== 00:08:41.236 Admin Commands 00:08:41.236 -------------- 00:08:41.236 Delete I/O Submission Queue (00h): Supported 00:08:41.236 Create I/O Submission Queue (01h): Supported 00:08:41.236 Get Log Page (02h): Supported 00:08:41.236 Delete I/O Completion Queue (04h): Supported 00:08:41.236 Create I/O Completion Queue (05h): Supported 00:08:41.236 Identify (06h): Supported 00:08:41.236 Abort (08h): Supported 00:08:41.236 Set Features (09h): Supported 00:08:41.236 Get Features (0Ah): Supported 00:08:41.236 Asynchronous Event Request (0Ch): Supported 00:08:41.236 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.236 Directive Send (19h): Supported 00:08:41.236 Directive Receive (1Ah): Supported 00:08:41.236 Virtualization Management (1Ch): Supported 00:08:41.236 Doorbell Buffer Config (7Ch): Supported 00:08:41.236 Format NVM (80h): Supported LBA-Change 00:08:41.236 I/O Commands 00:08:41.236 ------------ 00:08:41.236 Flush (00h): Supported LBA-Change 00:08:41.236 Write (01h): Supported LBA-Change 00:08:41.236 Read (02h): Supported 00:08:41.236 Compare (05h): Supported 00:08:41.236 Write Zeroes (08h): Supported LBA-Change 00:08:41.236 Dataset Management (09h): Supported LBA-Change 00:08:41.236 Unknown (0Ch): Supported 00:08:41.236 Unknown (12h): Supported 00:08:41.236 Copy (19h): Supported LBA-Change 00:08:41.236 Unknown (1Dh):
Supported LBA-Change 00:08:41.236 00:08:41.236 Error Log 00:08:41.236 ========= 00:08:41.236 00:08:41.236 Arbitration 00:08:41.236 =========== 00:08:41.236 Arbitration Burst: no limit 00:08:41.236 00:08:41.236 Power Management 00:08:41.236 ================ 00:08:41.236 Number of Power States: 1 00:08:41.236 Current Power State: Power State #0 00:08:41.236 Power State #0: 00:08:41.236 Max Power: 25.00 W 00:08:41.236 Non-Operational State: Operational 00:08:41.236 Entry Latency: 16 microseconds 00:08:41.236 Exit Latency: 4 microseconds 00:08:41.236 Relative Read Throughput: 0 00:08:41.236 Relative Read Latency: 0 00:08:41.236 Relative Write Throughput: 0 00:08:41.236 Relative Write Latency: 0 00:08:41.236 Idle Power: Not Reported 00:08:41.236 Active Power: Not Reported 00:08:41.236 Non-Operational Permissive Mode: Not Supported 00:08:41.236 00:08:41.236 Health Information 00:08:41.236 ================== 00:08:41.236 Critical Warnings: 00:08:41.236 Available Spare Space: OK 00:08:41.236 Temperature: OK 00:08:41.236 Device Reliability: OK 00:08:41.236 Read Only: No 00:08:41.236 Volatile Memory Backup: OK 00:08:41.236 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.236 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.236 Available Spare: 0% 00:08:41.236 Available Spare Threshold: 0% 00:08:41.236 Life Percentage Used: 0% 00:08:41.236 Data Units Read: 1004 00:08:41.236 Data Units Written: 871 00:08:41.236 Host Read Commands: 51819 00:08:41.236 Host Write Commands: 50611 00:08:41.236 Controller Busy Time: 0 minutes 00:08:41.236 Power Cycles: 0 00:08:41.236 Power On Hours: 0 hours 00:08:41.236 Unsafe Shutdowns: 0 00:08:41.236 Unrecoverable Media Errors: 0 00:08:41.236 Lifetime Error Log Entries: 0 00:08:41.236 Warning Temperature Time: 0 minutes 00:08:41.236 Critical Temperature Time: 0 minutes 00:08:41.236 00:08:41.236 Number of Queues 00:08:41.236 ================ 00:08:41.236 Number of I/O Submission Queues: 64 00:08:41.236 Number of I/O Completion Queues: 64 00:08:41.236 00:08:41.236 ZNS Specific Controller Data 00:08:41.236 ============================ 00:08:41.236 Zone Append Size Limit: 0 00:08:41.236 00:08:41.236 00:08:41.236 Active Namespaces 00:08:41.236 ================= 00:08:41.236 Namespace ID:1 00:08:41.236 Error Recovery Timeout: Unlimited 00:08:41.236 Command Set Identifier: NVM (00h) 00:08:41.236 Deallocate: Supported 00:08:41.236 Deallocated/Unwritten Error: Supported 00:08:41.236 Deallocated Read Value: All 0x00 00:08:41.236 Deallocate in Write Zeroes: Not Supported 00:08:41.236 Deallocated Guard Field: 0xFFFF 00:08:41.236 Flush: Supported 00:08:41.236 Reservation: Not Supported 00:08:41.236 Namespace Sharing Capabilities: Private 00:08:41.236 Size (in LBAs): 1310720 (5GiB) 00:08:41.236 Capacity (in LBAs): 1310720 (5GiB) 00:08:41.236 Utilization (in LBAs): 1310720 (5GiB) 00:08:41.236 Thin Provisioning: Not Supported 00:08:41.236 Per-NS Atomic Units: No 00:08:41.236 Maximum Single Source Range Length: 128 00:08:41.236 Maximum Copy Length: 128 00:08:41.236 Maximum Source Range Count: 128 00:08:41.236 NGUID/EUI64 Never Reused: No 00:08:41.236 Namespace Write Protected: No 00:08:41.236 Number of LBA Formats: 8 00:08:41.236 Current LBA Format: LBA Format #04 00:08:41.236 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.236 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.236 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.236 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.236 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:41.236 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.236 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.236 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.236 00:08:41.236 NVM Specific Namespace Data 00:08:41.236 =========================== 00:08:41.236 Logical Block Storage Tag Mask: 0 00:08:41.236 Protection Information Capabilities: 00:08:41.236 16b Guard Protection Information Storage Tag Support: No 00:08:41.236 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.236 Storage Tag Check Read Support: No 00:08:41.236 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.236 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.236 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.236 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.236 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.236 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.236 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.236 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.236 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:41.236 21:59:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:41.499 ===================================================== 00:08:41.499 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.499 ===================================================== 00:08:41.499 Controller Capabilities/Features 00:08:41.499 ================================ 00:08:41.499 Vendor ID: 1b36 00:08:41.499 Subsystem Vendor ID: 1af4 00:08:41.499 Serial Number: 12342 00:08:41.499 Model Number: QEMU NVMe Ctrl 00:08:41.499 Firmware Version: 8.0.0 00:08:41.499 Recommended Arb Burst: 6 00:08:41.499 IEEE OUI Identifier: 00 54 52 00:08:41.499 Multi-path I/O 00:08:41.499 May have multiple subsystem ports: No 00:08:41.499 May have multiple controllers: No 00:08:41.499 Associated with SR-IOV VF: No 00:08:41.499 Max Data Transfer Size: 524288 00:08:41.499 Max Number of Namespaces: 256 00:08:41.499 Max Number of I/O Queues: 64 00:08:41.499 NVMe Specification Version (VS): 1.4 00:08:41.499 NVMe Specification Version (Identify): 1.4 00:08:41.499 Maximum Queue Entries: 2048 00:08:41.499 Contiguous Queues Required: Yes 00:08:41.499 Arbitration Mechanisms Supported 00:08:41.499 Weighted Round Robin: Not Supported 00:08:41.499 Vendor Specific: Not Supported 00:08:41.499 Reset Timeout: 7500 ms 00:08:41.499 Doorbell Stride: 4 bytes 00:08:41.499 NVM Subsystem Reset: Not Supported 00:08:41.499 Command Sets Supported 00:08:41.499 NVM Command Set: Supported 00:08:41.499 Boot Partition: Not Supported 00:08:41.499 Memory Page Size Minimum: 4096 bytes 00:08:41.499 Memory Page Size Maximum: 65536 bytes 00:08:41.499 Persistent Memory Region: Not Supported 00:08:41.499 Optional Asynchronous Events Supported 00:08:41.499 Namespace Attribute Notices: Supported 00:08:41.499 Firmware Activation Notices: Not Supported 00:08:41.499 ANA Change Notices: Not Supported 00:08:41.499 PLE Aggregate Log Change Notices: Not Supported 00:08:41.499 LBA Status Info Alert Notices: 
Not Supported 00:08:41.499 EGE Aggregate Log Change Notices: Not Supported 00:08:41.499 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.499 Zone Descriptor Change Notices: Not Supported 00:08:41.499 Discovery Log Change Notices: Not Supported 00:08:41.499 Controller Attributes 00:08:41.499 128-bit Host Identifier: Not Supported 00:08:41.499 Non-Operational Permissive Mode: Not Supported 00:08:41.499 NVM Sets: Not Supported 00:08:41.499 Read Recovery Levels: Not Supported 00:08:41.499 Endurance Groups: Not Supported 00:08:41.499 Predictable Latency Mode: Not Supported 00:08:41.499 Traffic Based Keep Alive: Not Supported 00:08:41.499 Namespace Granularity: Not Supported 00:08:41.499 SQ Associations: Not Supported 00:08:41.499 UUID List: Not Supported 00:08:41.499 Multi-Domain Subsystem: Not Supported 00:08:41.499 Fixed Capacity Management: Not Supported 00:08:41.499 Variable Capacity Management: Not Supported 00:08:41.499 Delete Endurance Group: Not Supported 00:08:41.499 Delete NVM Set: Not Supported 00:08:41.499 Extended LBA Formats Supported: Supported 00:08:41.499 Flexible Data Placement Supported: Not Supported 00:08:41.499 00:08:41.499 Controller Memory Buffer Support 00:08:41.499 ================================ 00:08:41.499 Supported: No 00:08:41.499 00:08:41.499 Persistent Memory Region Support 00:08:41.499 ================================ 00:08:41.499 Supported: No 00:08:41.499 00:08:41.499 Admin Command Set Attributes 00:08:41.499 ============================ 00:08:41.499 Security Send/Receive: Not Supported 00:08:41.499 Format NVM: Supported 00:08:41.499 Firmware Activate/Download: Not Supported 00:08:41.499 Namespace Management: Supported 00:08:41.499 Device Self-Test: Not Supported 00:08:41.499 Directives: Supported 00:08:41.499 NVMe-MI: Not Supported 00:08:41.499 Virtualization Management: Not Supported 00:08:41.499 Doorbell Buffer Config: Supported 00:08:41.499 Get LBA Status Capability: Not Supported 00:08:41.499 Command & Feature Lockdown Capability: Not Supported 00:08:41.499 Abort Command Limit: 4 00:08:41.499 Async Event Request Limit: 4 00:08:41.499 Number of Firmware Slots: N/A 00:08:41.499 Firmware Slot 1 Read-Only: N/A 00:08:41.499 Firmware Activation Without Reset: N/A 00:08:41.499 Multiple Update Detection Support: N/A 00:08:41.499 Firmware Update Granularity: No Information Provided 00:08:41.499 Per-Namespace SMART Log: Yes 00:08:41.499 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.499 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:41.499 Command Effects Log Page: Supported 00:08:41.499 Get Log Page Extended Data: Supported 00:08:41.499 Telemetry Log Pages: Not Supported 00:08:41.499 Persistent Event Log Pages: Not Supported 00:08:41.499 Supported Log Pages Log Page: May Support 00:08:41.499 Commands Supported & Effects Log Page: Not Supported 00:08:41.499 Feature Identifiers & Effects Log Page: May Support 00:08:41.499 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.499 Data Area 4 for Telemetry Log: Not Supported 00:08:41.499 Error Log Page Entries Supported: 1 00:08:41.500 Keep Alive: Not Supported 00:08:41.500 00:08:41.500 NVM Command Set Attributes 00:08:41.500 ========================== 00:08:41.500 Submission Queue Entry Size 00:08:41.500 Max: 64 00:08:41.500 Min: 64 00:08:41.500 Completion Queue Entry Size 00:08:41.500 Max: 16 00:08:41.500 Min: 16 00:08:41.500 Number of Namespaces: 256 00:08:41.500 Compare Command: Supported 00:08:41.500 Write Uncorrectable Command: Not Supported 00:08:41.500 Dataset Management Command:
Supported 00:08:41.500 Write Zeroes Command: Supported 00:08:41.500 Set Features Save Field: Supported 00:08:41.500 Reservations: Not Supported 00:08:41.500 Timestamp: Supported 00:08:41.500 Copy: Supported 00:08:41.500 Volatile Write Cache: Present 00:08:41.500 Atomic Write Unit (Normal): 1 00:08:41.500 Atomic Write Unit (PFail): 1 00:08:41.500 Atomic Compare & Write Unit: 1 00:08:41.500 Fused Compare & Write: Not Supported 00:08:41.500 Scatter-Gather List 00:08:41.500 SGL Command Set: Supported 00:08:41.500 SGL Keyed: Not Supported 00:08:41.500 SGL Bit Bucket Descriptor: Not Supported 00:08:41.500 SGL Metadata Pointer: Not Supported 00:08:41.500 Oversized SGL: Not Supported 00:08:41.500 SGL Metadata Address: Not Supported 00:08:41.500 SGL Offset: Not Supported 00:08:41.500 Transport SGL Data Block: Not Supported 00:08:41.500 Replay Protected Memory Block: Not Supported 00:08:41.500 00:08:41.500 Firmware Slot Information 00:08:41.500 ========================= 00:08:41.500 Active slot: 1 00:08:41.500 Slot 1 Firmware Revision: 1.0 00:08:41.500 00:08:41.500 00:08:41.500 Commands Supported and Effects 00:08:41.500 ============================== 00:08:41.500 Admin Commands 00:08:41.500 -------------- 00:08:41.500 Delete I/O Submission Queue (00h): Supported 00:08:41.500 Create I/O Submission Queue (01h): Supported 00:08:41.500 Get Log Page (02h): Supported 00:08:41.500 Delete I/O Completion Queue (04h): Supported 00:08:41.500 Create I/O Completion Queue (05h): Supported 00:08:41.500 Identify (06h): Supported 00:08:41.500 Abort (08h): Supported 00:08:41.500 Set Features (09h): Supported 00:08:41.500 Get Features (0Ah): Supported 00:08:41.500 Asynchronous Event Request (0Ch): Supported 00:08:41.500 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.500 Directive Send (19h): Supported 00:08:41.500 Directive Receive (1Ah): Supported 00:08:41.500 Virtualization Management (1Ch): Supported 00:08:41.500 Doorbell Buffer Config (7Ch): Supported 00:08:41.500 Format NVM (80h): Supported LBA-Change 00:08:41.500 I/O Commands 00:08:41.500 ------------ 00:08:41.500 Flush (00h): Supported LBA-Change 00:08:41.500 Write (01h): Supported LBA-Change 00:08:41.500 Read (02h): Supported 00:08:41.500 Compare (05h): Supported 00:08:41.500 Write Zeroes (08h): Supported LBA-Change 00:08:41.500 Dataset Management (09h): Supported LBA-Change 00:08:41.500 Unknown (0Ch): Supported 00:08:41.500 Unknown (12h): Supported 00:08:41.500 Copy (19h): Supported LBA-Change 00:08:41.500 Unknown (1Dh): Supported LBA-Change 00:08:41.500 00:08:41.500 Error Log 00:08:41.500 ========= 00:08:41.500 00:08:41.500 Arbitration 00:08:41.500 =========== 00:08:41.500 Arbitration Burst: no limit 00:08:41.500 00:08:41.500 Power Management 00:08:41.500 ================ 00:08:41.500 Number of Power States: 1 00:08:41.500 Current Power State: Power State #0 00:08:41.500 Power State #0: 00:08:41.500 Max Power: 25.00 W 00:08:41.500 Non-Operational State: Operational 00:08:41.500 Entry Latency: 16 microseconds 00:08:41.500 Exit Latency: 4 microseconds 00:08:41.500 Relative Read Throughput: 0 00:08:41.500 Relative Read Latency: 0 00:08:41.500 Relative Write Throughput: 0 00:08:41.500 Relative Write Latency: 0 00:08:41.500 Idle Power: Not Reported 00:08:41.500 Active Power: Not Reported 00:08:41.500 Non-Operational Permissive Mode: Not Supported 00:08:41.500 00:08:41.500 Health Information 00:08:41.500 ================== 00:08:41.500 Critical Warnings: 00:08:41.500 Available Spare Space: OK 00:08:41.500 Temperature: OK 00:08:41.500 Device 
Reliability: OK 00:08:41.500 Read Only: No 00:08:41.500 Volatile Memory Backup: OK 00:08:41.500 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.500 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.500 Available Spare: 0% 00:08:41.500 Available Spare Threshold: 0% 00:08:41.500 Life Percentage Used: 0% 00:08:41.500 Data Units Read: 2086 00:08:41.500 Data Units Written: 1873 00:08:41.500 Host Read Commands: 105639 00:08:41.500 Host Write Commands: 103908 00:08:41.500 Controller Busy Time: 0 minutes 00:08:41.500 Power Cycles: 0 00:08:41.500 Power On Hours: 0 hours 00:08:41.500 Unsafe Shutdowns: 0 00:08:41.500 Unrecoverable Media Errors: 0 00:08:41.500 Lifetime Error Log Entries: 0 00:08:41.500 Warning Temperature Time: 0 minutes 00:08:41.500 Critical Temperature Time: 0 minutes 00:08:41.500 00:08:41.500 Number of Queues 00:08:41.500 ================ 00:08:41.500 Number of I/O Submission Queues: 64 00:08:41.500 Number of I/O Completion Queues: 64 00:08:41.500 00:08:41.500 ZNS Specific Controller Data 00:08:41.500 ============================ 00:08:41.500 Zone Append Size Limit: 0 00:08:41.500 00:08:41.500 00:08:41.500 Active Namespaces 00:08:41.500 ================= 00:08:41.500 Namespace ID:1 00:08:41.500 Error Recovery Timeout: Unlimited 00:08:41.500 Command Set Identifier: NVM (00h) 00:08:41.500 Deallocate: Supported 00:08:41.500 Deallocated/Unwritten Error: Supported 00:08:41.500 Deallocated Read Value: All 0x00 00:08:41.500 Deallocate in Write Zeroes: Not Supported 00:08:41.500 Deallocated Guard Field: 0xFFFF 00:08:41.500 Flush: Supported 00:08:41.500 Reservation: Not Supported 00:08:41.500 Namespace Sharing Capabilities: Private 00:08:41.500 Size (in LBAs): 1048576 (4GiB) 00:08:41.500 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.500 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.500 Thin Provisioning: Not Supported 00:08:41.500 Per-NS Atomic Units: No 00:08:41.500 Maximum Single Source Range Length: 128 00:08:41.500 Maximum Copy Length: 128 00:08:41.500 Maximum Source Range Count: 128 00:08:41.500 NGUID/EUI64 Never Reused: No 00:08:41.500 Namespace Write Protected: No 00:08:41.500 Number of LBA Formats: 8 00:08:41.500 Current LBA Format: LBA Format #04 00:08:41.500 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.500 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.500 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.500 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.500 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.500 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.500 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.500 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.500 00:08:41.500 NVM Specific Namespace Data 00:08:41.500 =========================== 00:08:41.500 Logical Block Storage Tag Mask: 0 00:08:41.500 Protection Information Capabilities: 00:08:41.500 16b Guard Protection Information Storage Tag Support: No 00:08:41.500 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.500 Storage Tag Check Read Support: No 00:08:41.500 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.500 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.500 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.500 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.500 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.500 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.500 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.500 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.500 Namespace ID:2 00:08:41.500 Error Recovery Timeout: Unlimited 00:08:41.500 Command Set Identifier: NVM (00h) 00:08:41.500 Deallocate: Supported 00:08:41.500 Deallocated/Unwritten Error: Supported 00:08:41.500 Deallocated Read Value: All 0x00 00:08:41.500 Deallocate in Write Zeroes: Not Supported 00:08:41.500 Deallocated Guard Field: 0xFFFF 00:08:41.500 Flush: Supported 00:08:41.500 Reservation: Not Supported 00:08:41.500 Namespace Sharing Capabilities: Private 00:08:41.500 Size (in LBAs): 1048576 (4GiB) 00:08:41.500 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.500 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.500 Thin Provisioning: Not Supported 00:08:41.500 Per-NS Atomic Units: No 00:08:41.500 Maximum Single Source Range Length: 128 00:08:41.500 Maximum Copy Length: 128 00:08:41.500 Maximum Source Range Count: 128 00:08:41.500 NGUID/EUI64 Never Reused: No 00:08:41.500 Namespace Write Protected: No 00:08:41.500 Number of LBA Formats: 8 00:08:41.501 Current LBA Format: LBA Format #04 00:08:41.501 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.501 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.501 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.501 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.501 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.501 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.501 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.501 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.501 00:08:41.501 NVM Specific Namespace Data 00:08:41.501 =========================== 00:08:41.501 Logical Block Storage Tag Mask: 0 00:08:41.501 Protection Information Capabilities: 00:08:41.501 16b Guard Protection Information Storage Tag Support: No 00:08:41.501 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.501 Storage Tag Check Read Support: No 00:08:41.501 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Namespace ID:3 00:08:41.501 Error Recovery Timeout: Unlimited 00:08:41.501 Command Set Identifier: NVM (00h) 00:08:41.501 Deallocate: Supported 00:08:41.501 Deallocated/Unwritten Error: Supported 00:08:41.501 Deallocated Read Value: All 0x00 00:08:41.501 Deallocate in Write Zeroes: Not Supported 00:08:41.501 Deallocated Guard Field: 0xFFFF 00:08:41.501 Flush: Supported 00:08:41.501 Reservation: Not Supported 00:08:41.501 
Namespace Sharing Capabilities: Private 00:08:41.501 Size (in LBAs): 1048576 (4GiB) 00:08:41.501 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.501 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.501 Thin Provisioning: Not Supported 00:08:41.501 Per-NS Atomic Units: No 00:08:41.501 Maximum Single Source Range Length: 128 00:08:41.501 Maximum Copy Length: 128 00:08:41.501 Maximum Source Range Count: 128 00:08:41.501 NGUID/EUI64 Never Reused: No 00:08:41.501 Namespace Write Protected: No 00:08:41.501 Number of LBA Formats: 8 00:08:41.501 Current LBA Format: LBA Format #04 00:08:41.501 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.501 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.501 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.501 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.501 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.501 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.501 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.501 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.501 00:08:41.501 NVM Specific Namespace Data 00:08:41.501 =========================== 00:08:41.501 Logical Block Storage Tag Mask: 0 00:08:41.501 Protection Information Capabilities: 00:08:41.501 16b Guard Protection Information Storage Tag Support: No 00:08:41.501 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.501 Storage Tag Check Read Support: No 00:08:41.501 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.501 21:59:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:41.501 21:59:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:41.763 ===================================================== 00:08:41.763 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.763 ===================================================== 00:08:41.763 Controller Capabilities/Features 00:08:41.763 ================================ 00:08:41.763 Vendor ID: 1b36 00:08:41.763 Subsystem Vendor ID: 1af4 00:08:41.763 Serial Number: 12343 00:08:41.763 Model Number: QEMU NVMe Ctrl 00:08:41.763 Firmware Version: 8.0.0 00:08:41.763 Recommended Arb Burst: 6 00:08:41.763 IEEE OUI Identifier: 00 54 52 00:08:41.763 Multi-path I/O 00:08:41.763 May have multiple subsystem ports: No 00:08:41.763 May have multiple controllers: Yes 00:08:41.763 Associated with SR-IOV VF: No 00:08:41.763 Max Data Transfer Size: 524288 00:08:41.763 Max Number of Namespaces: 256 00:08:41.763 Max Number of I/O Queues: 64 00:08:41.763 NVMe Specification Version (VS): 1.4 00:08:41.763 NVMe Specification Version (Identify): 1.4 00:08:41.763 Maximum Queue Entries: 2048 
00:08:41.763 Contiguous Queues Required: Yes 00:08:41.763 Arbitration Mechanisms Supported 00:08:41.763 Weighted Round Robin: Not Supported 00:08:41.763 Vendor Specific: Not Supported 00:08:41.763 Reset Timeout: 7500 ms 00:08:41.763 Doorbell Stride: 4 bytes 00:08:41.763 NVM Subsystem Reset: Not Supported 00:08:41.763 Command Sets Supported 00:08:41.763 NVM Command Set: Supported 00:08:41.763 Boot Partition: Not Supported 00:08:41.763 Memory Page Size Minimum: 4096 bytes 00:08:41.763 Memory Page Size Maximum: 65536 bytes 00:08:41.764 Persistent Memory Region: Not Supported 00:08:41.764 Optional Asynchronous Events Supported 00:08:41.764 Namespace Attribute Notices: Supported 00:08:41.764 Firmware Activation Notices: Not Supported 00:08:41.764 ANA Change Notices: Not Supported 00:08:41.764 PLE Aggregate Log Change Notices: Not Supported 00:08:41.764 LBA Status Info Alert Notices: Not Supported 00:08:41.764 EGE Aggregate Log Change Notices: Not Supported 00:08:41.764 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.764 Zone Descriptor Change Notices: Not Supported 00:08:41.764 Discovery Log Change Notices: Not Supported 00:08:41.764 Controller Attributes 00:08:41.764 128-bit Host Identifier: Not Supported 00:08:41.764 Non-Operational Permissive Mode: Not Supported 00:08:41.764 NVM Sets: Not Supported 00:08:41.764 Read Recovery Levels: Not Supported 00:08:41.764 Endurance Groups: Supported 00:08:41.764 Predictable Latency Mode: Not Supported 00:08:41.764 Traffic Based Keep Alive: Not Supported 00:08:41.764 Namespace Granularity: Not Supported 00:08:41.764 SQ Associations: Not Supported 00:08:41.764 UUID List: Not Supported 00:08:41.764 Multi-Domain Subsystem: Not Supported 00:08:41.764 Fixed Capacity Management: Not Supported 00:08:41.764 Variable Capacity Management: Not Supported 00:08:41.764 Delete Endurance Group: Not Supported 00:08:41.764 Delete NVM Set: Not Supported 00:08:41.764 Extended LBA Formats Supported: Supported 00:08:41.764 Flexible Data Placement Supported: Supported 00:08:41.764 00:08:41.764 Controller Memory Buffer Support 00:08:41.764 ================================ 00:08:41.764 Supported: No 00:08:41.764 00:08:41.764 Persistent Memory Region Support 00:08:41.764 ================================ 00:08:41.764 Supported: No 00:08:41.764 00:08:41.764 Admin Command Set Attributes 00:08:41.764 ============================ 00:08:41.764 Security Send/Receive: Not Supported 00:08:41.764 Format NVM: Supported 00:08:41.764 Firmware Activate/Download: Not Supported 00:08:41.764 Namespace Management: Supported 00:08:41.764 Device Self-Test: Not Supported 00:08:41.764 Directives: Supported 00:08:41.764 NVMe-MI: Not Supported 00:08:41.764 Virtualization Management: Not Supported 00:08:41.764 Doorbell Buffer Config: Supported 00:08:41.764 Get LBA Status Capability: Not Supported 00:08:41.764 Command & Feature Lockdown Capability: Not Supported 00:08:41.764 Abort Command Limit: 4 00:08:41.764 Async Event Request Limit: 4 00:08:41.764 Number of Firmware Slots: N/A 00:08:41.764 Firmware Slot 1 Read-Only: N/A 00:08:41.764 Firmware Activation Without Reset: N/A 00:08:41.764 Multiple Update Detection Support: N/A 00:08:41.764 Firmware Update Granularity: No Information Provided 00:08:41.764 Per-Namespace SMART Log: Yes 00:08:41.764 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.764 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:41.764 Command Effects Log Page: Supported 00:08:41.764 Get Log Page Extended Data: Supported 00:08:41.764 Telemetry Log Pages: Not 
Supported 00:08:41.764 Persistent Event Log Pages: Not Supported 00:08:41.764 Supported Log Pages Log Page: May Support 00:08:41.764 Commands Supported & Effects Log Page: Not Supported 00:08:41.764 Feature Identifiers & Effects Log Page: May Support 00:08:41.764 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.764 Data Area 4 for Telemetry Log: Not Supported 00:08:41.764 Error Log Page Entries Supported: 1 00:08:41.764 Keep Alive: Not Supported 00:08:41.764 00:08:41.764 NVM Command Set Attributes 00:08:41.764 ========================== 00:08:41.764 Submission Queue Entry Size 00:08:41.764 Max: 64 00:08:41.764 Min: 64 00:08:41.764 Completion Queue Entry Size 00:08:41.764 Max: 16 00:08:41.764 Min: 16 00:08:41.764 Number of Namespaces: 256 00:08:41.764 Compare Command: Supported 00:08:41.764 Write Uncorrectable Command: Not Supported 00:08:41.764 Dataset Management Command: Supported 00:08:41.764 Write Zeroes Command: Supported 00:08:41.764 Set Features Save Field: Supported 00:08:41.764 Reservations: Not Supported 00:08:41.764 Timestamp: Supported 00:08:41.764 Copy: Supported 00:08:41.764 Volatile Write Cache: Present 00:08:41.764 Atomic Write Unit (Normal): 1 00:08:41.764 Atomic Write Unit (PFail): 1 00:08:41.764 Atomic Compare & Write Unit: 1 00:08:41.764 Fused Compare & Write: Not Supported 00:08:41.764 Scatter-Gather List 00:08:41.764 SGL Command Set: Supported 00:08:41.764 SGL Keyed: Not Supported 00:08:41.764 SGL Bit Bucket Descriptor: Not Supported 00:08:41.764 SGL Metadata Pointer: Not Supported 00:08:41.764 Oversized SGL: Not Supported 00:08:41.764 SGL Metadata Address: Not Supported 00:08:41.764 SGL Offset: Not Supported 00:08:41.764 Transport SGL Data Block: Not Supported 00:08:41.764 Replay Protected Memory Block: Not Supported 00:08:41.764 00:08:41.764 Firmware Slot Information 00:08:41.764 ========================= 00:08:41.764 Active slot: 1 00:08:41.764 Slot 1 Firmware Revision: 1.0 00:08:41.764 00:08:41.764 00:08:41.764 Commands Supported and Effects 00:08:41.764 ============================== 00:08:41.764 Admin Commands 00:08:41.764 -------------- 00:08:41.764 Delete I/O Submission Queue (00h): Supported 00:08:41.764 Create I/O Submission Queue (01h): Supported 00:08:41.764 Get Log Page (02h): Supported 00:08:41.764 Delete I/O Completion Queue (04h): Supported 00:08:41.764 Create I/O Completion Queue (05h): Supported 00:08:41.764 Identify (06h): Supported 00:08:41.764 Abort (08h): Supported 00:08:41.764 Set Features (09h): Supported 00:08:41.764 Get Features (0Ah): Supported 00:08:41.764 Asynchronous Event Request (0Ch): Supported 00:08:41.764 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.764 Directive Send (19h): Supported 00:08:41.764 Directive Receive (1Ah): Supported 00:08:41.764 Virtualization Management (1Ch): Supported 00:08:41.764 Doorbell Buffer Config (7Ch): Supported 00:08:41.764 Format NVM (80h): Supported LBA-Change 00:08:41.764 I/O Commands 00:08:41.764 ------------ 00:08:41.764 Flush (00h): Supported LBA-Change 00:08:41.764 Write (01h): Supported LBA-Change 00:08:41.764 Read (02h): Supported 00:08:41.764 Compare (05h): Supported 00:08:41.764 Write Zeroes (08h): Supported LBA-Change 00:08:41.764 Dataset Management (09h): Supported LBA-Change 00:08:41.764 Unknown (0Ch): Supported 00:08:41.764 Unknown (12h): Supported 00:08:41.764 Copy (19h): Supported LBA-Change 00:08:41.764 Unknown (1Dh): Supported LBA-Change 00:08:41.764 00:08:41.764 Error Log 00:08:41.764 ========= 00:08:41.764 00:08:41.764 Arbitration 00:08:41.764 =========== 
00:08:41.764 Arbitration Burst: no limit 00:08:41.764 00:08:41.764 Power Management 00:08:41.764 ================ 00:08:41.764 Number of Power States: 1 00:08:41.764 Current Power State: Power State #0 00:08:41.764 Power State #0: 00:08:41.764 Max Power: 25.00 W 00:08:41.764 Non-Operational State: Operational 00:08:41.764 Entry Latency: 16 microseconds 00:08:41.764 Exit Latency: 4 microseconds 00:08:41.764 Relative Read Throughput: 0 00:08:41.764 Relative Read Latency: 0 00:08:41.764 Relative Write Throughput: 0 00:08:41.764 Relative Write Latency: 0 00:08:41.764 Idle Power: Not Reported 00:08:41.764 Active Power: Not Reported 00:08:41.764 Non-Operational Permissive Mode: Not Supported 00:08:41.764 00:08:41.764 Health Information 00:08:41.764 ================== 00:08:41.764 Critical Warnings: 00:08:41.764 Available Spare Space: OK 00:08:41.764 Temperature: OK 00:08:41.764 Device Reliability: OK 00:08:41.764 Read Only: No 00:08:41.764 Volatile Memory Backup: OK 00:08:41.764 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.764 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.764 Available Spare: 0% 00:08:41.764 Available Spare Threshold: 0% 00:08:41.764 Life Percentage Used: 0% 00:08:41.764 Data Units Read: 762 00:08:41.764 Data Units Written: 691 00:08:41.764 Host Read Commands: 35793 00:08:41.764 Host Write Commands: 35216 00:08:41.764 Controller Busy Time: 0 minutes 00:08:41.764 Power Cycles: 0 00:08:41.764 Power On Hours: 0 hours 00:08:41.764 Unsafe Shutdowns: 0 00:08:41.764 Unrecoverable Media Errors: 0 00:08:41.764 Lifetime Error Log Entries: 0 00:08:41.764 Warning Temperature Time: 0 minutes 00:08:41.764 Critical Temperature Time: 0 minutes 00:08:41.764 00:08:41.764 Number of Queues 00:08:41.764 ================ 00:08:41.764 Number of I/O Submission Queues: 64 00:08:41.764 Number of I/O Completion Queues: 64 00:08:41.765 00:08:41.765 ZNS Specific Controller Data 00:08:41.765 ============================ 00:08:41.765 Zone Append Size Limit: 0 00:08:41.765 00:08:41.765 00:08:41.765 Active Namespaces 00:08:41.765 ================= 00:08:41.765 Namespace ID:1 00:08:41.765 Error Recovery Timeout: Unlimited 00:08:41.765 Command Set Identifier: NVM (00h) 00:08:41.765 Deallocate: Supported 00:08:41.765 Deallocated/Unwritten Error: Supported 00:08:41.765 Deallocated Read Value: All 0x00 00:08:41.765 Deallocate in Write Zeroes: Not Supported 00:08:41.765 Deallocated Guard Field: 0xFFFF 00:08:41.765 Flush: Supported 00:08:41.765 Reservation: Not Supported 00:08:41.765 Namespace Sharing Capabilities: Multiple Controllers 00:08:41.765 Size (in LBAs): 262144 (1GiB) 00:08:41.765 Capacity (in LBAs): 262144 (1GiB) 00:08:41.765 Utilization (in LBAs): 262144 (1GiB) 00:08:41.765 Thin Provisioning: Not Supported 00:08:41.765 Per-NS Atomic Units: No 00:08:41.765 Maximum Single Source Range Length: 128 00:08:41.765 Maximum Copy Length: 128 00:08:41.765 Maximum Source Range Count: 128 00:08:41.765 NGUID/EUI64 Never Reused: No 00:08:41.765 Namespace Write Protected: No 00:08:41.765 Endurance group ID: 1 00:08:41.765 Number of LBA Formats: 8 00:08:41.765 Current LBA Format: LBA Format #04 00:08:41.765 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.765 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.765 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.765 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.765 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.765 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.765 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:41.765 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.765 00:08:41.765 Get Feature FDP: 00:08:41.765 ================ 00:08:41.765 Enabled: Yes 00:08:41.765 FDP configuration index: 0 00:08:41.765 00:08:41.765 FDP configurations log page 00:08:41.765 =========================== 00:08:41.765 Number of FDP configurations: 1 00:08:41.765 Version: 0 00:08:41.765 Size: 112 00:08:41.765 FDP Configuration Descriptor: 0 00:08:41.765 Descriptor Size: 96 00:08:41.765 Reclaim Group Identifier format: 2 00:08:41.765 FDP Volatile Write Cache: Not Present 00:08:41.765 FDP Configuration: Valid 00:08:41.765 Vendor Specific Size: 0 00:08:41.765 Number of Reclaim Groups: 2 00:08:41.765 Number of Reclaim Unit Handles: 8 00:08:41.765 Max Placement Identifiers: 128 00:08:41.765 Number of Namespaces Supported: 256 00:08:41.765 Reclaim Unit Nominal Size: 6000000 bytes 00:08:41.765 Estimated Reclaim Unit Time Limit: Not Reported 00:08:41.765 RUH Desc #000: RUH Type: Initially Isolated 00:08:41.765 RUH Desc #001: RUH Type: Initially Isolated 00:08:41.765 RUH Desc #002: RUH Type: Initially Isolated 00:08:41.765 RUH Desc #003: RUH Type: Initially Isolated 00:08:41.765 RUH Desc #004: RUH Type: Initially Isolated 00:08:41.765 RUH Desc #005: RUH Type: Initially Isolated 00:08:41.765 RUH Desc #006: RUH Type: Initially Isolated 00:08:41.765 RUH Desc #007: RUH Type: Initially Isolated 00:08:41.765 00:08:41.765 FDP reclaim unit handle usage log page 00:08:41.765 ====================================== 00:08:41.765 Number of Reclaim Unit Handles: 8 00:08:41.765 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:41.765 RUH Usage Desc #001: RUH Attributes: Unused 00:08:41.765 RUH Usage Desc #002: RUH Attributes: Unused 00:08:41.765 RUH Usage Desc #003: RUH Attributes: Unused 00:08:41.765 RUH Usage Desc #004: RUH Attributes: Unused 00:08:41.765 RUH Usage Desc #005: RUH Attributes: Unused 00:08:41.765 RUH Usage Desc #006: RUH Attributes: Unused 00:08:41.765 RUH Usage Desc #007: RUH Attributes: Unused 00:08:41.765 00:08:41.765 FDP statistics log page 00:08:41.765 ======================= 00:08:41.765 Host bytes with metadata written: 404856832 00:08:41.765 Media bytes with metadata written: 406540288 00:08:41.765 Media bytes erased: 0 00:08:41.765 00:08:41.765 FDP events log page 00:08:41.765 =================== 00:08:41.765 Number of FDP events: 0 00:08:41.765 00:08:41.765 NVM Specific Namespace Data 00:08:41.765 =========================== 00:08:41.765 Logical Block Storage Tag Mask: 0 00:08:41.765 Protection Information Capabilities: 00:08:41.765 16b Guard Protection Information Storage Tag Support: No 00:08:41.765 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:41.765 Storage Tag Check Read Support: No 00:08:41.765 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.765 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.765 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.765 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.765 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.765 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.765 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.765 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:41.765 00:08:41.765 real 0m1.495s 00:08:41.765 user 0m0.750s 00:08:41.765 sys 0m0.548s 00:08:41.765 21:59:14 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.765 21:59:14 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:41.765 ************************************ 00:08:41.765 END TEST nvme_identify 00:08:41.765 ************************************ 00:08:41.765 21:59:14 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:41.765 21:59:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.765 21:59:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.765 21:59:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.765 ************************************ 00:08:41.765 START TEST nvme_perf 00:08:41.765 ************************************ 00:08:41.765 21:59:14 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:41.765 21:59:14 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:43.164 Initializing NVMe Controllers 00:08:43.164 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.164 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.164 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.164 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.164 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:43.164 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:43.164 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:43.164 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:43.164 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:43.164 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:43.164 Initialization complete. Launching workers. 
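The identify pass that ends just above is driven by the loop traced from nvme/nvme.sh (for bdf in "${bdfs[@]}"), which runs the identify example binary once per PCIe controller. A minimal sketch of that loop, assuming the bdfs array holds the four addresses the perf run below attaches to (how the array is actually populated is not shown in this log):

    #!/usr/bin/env bash
    # Sketch of the traced nvme.sh identify loop; the bdfs values are taken
    # from the controllers listed in the perf initialization below (an assumption).
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
    done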
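The capacities reported in the identify dumps are easy to cross-check: each namespace reports Current LBA Format: LBA Format #04, whose data size is 4096 bytes, so the LBA counts convert directly to the advertised sizes:

    # 4096-byte LBAs (LBA Format #04) converted to bytes
    echo $(( 1048576 * 4096 ))   # 4294967296 = 4 GiB (the namespaces in the earlier dumps)
    echo $((  262144 * 4096 ))   # 1073741824 = 1 GiB (NSID 1 on 0000:00:13.0)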
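The FDP statistics log page on 0000:00:13.0 also supports a quick write-amplification estimate, if one reads "Media bytes with metadata written" over "Host bytes with metadata written" as the usual WAF ratio (that interpretation is an assumption; the log only reports the raw counters):

    # Rough write-amplification factor from the FDP statistics above
    awk 'BEGIN {
      host  = 404856832     # Host bytes with metadata written
      media = 406540288     # Media bytes with metadata written
      printf "WAF ~ %.4f\n", media / host   # ~1.0042
    }'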
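The perf run launched above submits 12288-byte reads (-o) at a fixed queue depth of 128 (-q) for one second (-t), with latency tracking enabled (the -L flags are what produce the per-controller histograms further down). At a fixed queue depth, Little's law ties the columns of the table that follows together: IOPS is approximately queue_depth / average_latency. A rough cross-check against the first row (13970.25 IOPS, 9163.56 us average, 163.71 MiB/s):

    # Little's law sanity check for the -q 128 run (row values from the table below)
    awk 'BEGIN {
      qd = 128; avg_us = 9163.56; io_bytes = 12288
      iops = qd / (avg_us / 1e6)                  # ~13968 vs 13970.25 reported
      printf "IOPS ~ %.0f  MiB/s ~ %.2f\n", iops, iops * io_bytes / 1048576
    }'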
00:08:43.164 ======================================================== 00:08:43.164 Latency(us) 00:08:43.164 Device Information : IOPS MiB/s Average min max 00:08:43.164 PCIE (0000:00:10.0) NSID 1 from core 0: 13970.25 163.71 9163.56 5977.15 53062.03 00:08:43.164 PCIE (0000:00:11.0) NSID 1 from core 0: 13970.25 163.71 9131.33 6051.67 50012.99 00:08:43.164 PCIE (0000:00:13.0) NSID 1 from core 0: 13970.25 163.71 9098.36 6071.24 47185.24 00:08:43.164 PCIE (0000:00:12.0) NSID 1 from core 0: 13970.25 163.71 9064.27 6114.22 44255.66 00:08:43.164 PCIE (0000:00:12.0) NSID 2 from core 0: 13970.25 163.71 9031.18 6126.54 41255.99 00:08:43.164 PCIE (0000:00:12.0) NSID 3 from core 0: 14034.04 164.46 8956.94 6106.31 29383.64 00:08:43.164 ======================================================== 00:08:43.164 Total : 83885.28 983.03 9074.18 5977.15 53062.03 00:08:43.164 00:08:43.164 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:43.164 ================================================================================= 00:08:43.164 1.00000% : 6200.714us 00:08:43.164 10.00000% : 6956.898us 00:08:43.164 25.00000% : 7763.495us 00:08:43.164 50.00000% : 8418.855us 00:08:43.164 75.00000% : 9175.040us 00:08:43.164 90.00000% : 10838.646us 00:08:43.164 95.00000% : 15224.517us 00:08:43.164 98.00000% : 18450.905us 00:08:43.165 99.00000% : 19358.326us 00:08:43.165 99.50000% : 41539.742us 00:08:43.165 99.90000% : 52428.800us 00:08:43.165 99.99000% : 53235.397us 00:08:43.165 99.99900% : 53235.397us 00:08:43.165 99.99990% : 53235.397us 00:08:43.165 99.99999% : 53235.397us 00:08:43.165 00:08:43.165 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:43.165 ================================================================================= 00:08:43.165 1.00000% : 6251.126us 00:08:43.165 10.00000% : 6956.898us 00:08:43.165 25.00000% : 7813.908us 00:08:43.165 50.00000% : 8418.855us 00:08:43.165 75.00000% : 9175.040us 00:08:43.165 90.00000% : 10737.822us 00:08:43.165 95.00000% : 14720.394us 00:08:43.165 98.00000% : 18350.080us 00:08:43.165 99.00000% : 19156.677us 00:08:43.165 99.50000% : 38716.652us 00:08:43.165 99.90000% : 49404.062us 00:08:43.165 99.99000% : 50009.009us 00:08:43.165 99.99900% : 50210.658us 00:08:43.165 99.99990% : 50210.658us 00:08:43.165 99.99999% : 50210.658us 00:08:43.165 00:08:43.165 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:43.165 ================================================================================= 00:08:43.165 1.00000% : 6251.126us 00:08:43.165 10.00000% : 6906.486us 00:08:43.165 25.00000% : 7813.908us 00:08:43.165 50.00000% : 8469.268us 00:08:43.165 75.00000% : 9175.040us 00:08:43.165 90.00000% : 10838.646us 00:08:43.165 95.00000% : 14417.920us 00:08:43.165 98.00000% : 18249.255us 00:08:43.165 99.00000% : 19156.677us 00:08:43.165 99.50000% : 35691.914us 00:08:43.165 99.90000% : 46580.972us 00:08:43.165 99.99000% : 47185.920us 00:08:43.165 99.99900% : 47185.920us 00:08:43.165 99.99990% : 47185.920us 00:08:43.165 99.99999% : 47185.920us 00:08:43.165 00:08:43.165 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:43.165 ================================================================================= 00:08:43.165 1.00000% : 6251.126us 00:08:43.165 10.00000% : 6906.486us 00:08:43.165 25.00000% : 7813.908us 00:08:43.165 50.00000% : 8469.268us 00:08:43.165 75.00000% : 9124.628us 00:08:43.165 90.00000% : 10788.234us 00:08:43.165 95.00000% : 13712.148us 00:08:43.165 98.00000% : 18249.255us 00:08:43.165 
99.00000% : 19156.677us 00:08:43.165 99.50000% : 32667.175us 00:08:43.165 99.90000% : 43556.234us 00:08:43.165 99.99000% : 44362.831us 00:08:43.165 99.99900% : 44362.831us 00:08:43.165 99.99990% : 44362.831us 00:08:43.165 99.99999% : 44362.831us 00:08:43.165 00:08:43.165 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:43.165 ================================================================================= 00:08:43.165 1.00000% : 6251.126us 00:08:43.165 10.00000% : 6956.898us 00:08:43.165 25.00000% : 7813.908us 00:08:43.165 50.00000% : 8418.855us 00:08:43.165 75.00000% : 9175.040us 00:08:43.165 90.00000% : 10636.997us 00:08:43.165 95.00000% : 13006.375us 00:08:43.165 98.00000% : 18249.255us 00:08:43.165 99.00000% : 19055.852us 00:08:43.165 99.50000% : 29844.086us 00:08:43.165 99.90000% : 40531.495us 00:08:43.165 99.99000% : 41338.092us 00:08:43.165 99.99900% : 41338.092us 00:08:43.165 99.99990% : 41338.092us 00:08:43.165 99.99999% : 41338.092us 00:08:43.165 00:08:43.165 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:43.165 ================================================================================= 00:08:43.165 1.00000% : 6276.332us 00:08:43.165 10.00000% : 6956.898us 00:08:43.165 25.00000% : 7813.908us 00:08:43.165 50.00000% : 8469.268us 00:08:43.165 75.00000% : 9175.040us 00:08:43.165 90.00000% : 10788.234us 00:08:43.165 95.00000% : 13913.797us 00:08:43.165 98.00000% : 18047.606us 00:08:43.165 99.00000% : 18753.378us 00:08:43.165 99.50000% : 19257.502us 00:08:43.165 99.90000% : 28634.191us 00:08:43.165 99.99000% : 29440.788us 00:08:43.165 99.99900% : 29440.788us 00:08:43.165 99.99990% : 29440.788us 00:08:43.165 99.99999% : 29440.788us 00:08:43.165 00:08:43.165 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:43.165 ============================================================================== 00:08:43.165 Range in us Cumulative IO count 00:08:43.165 5973.858 - 5999.065: 0.0214% ( 3) 00:08:43.165 5999.065 - 6024.271: 0.0999% ( 11) 00:08:43.165 6024.271 - 6049.477: 0.1926% ( 13) 00:08:43.165 6049.477 - 6074.683: 0.2711% ( 11) 00:08:43.165 6074.683 - 6099.889: 0.4994% ( 32) 00:08:43.165 6099.889 - 6125.095: 0.6136% ( 16) 00:08:43.165 6125.095 - 6150.302: 0.7848% ( 24) 00:08:43.165 6150.302 - 6175.508: 0.9561% ( 24) 00:08:43.165 6175.508 - 6200.714: 1.1701% ( 30) 00:08:43.165 6200.714 - 6225.920: 1.3485% ( 25) 00:08:43.165 6225.920 - 6251.126: 1.5554% ( 29) 00:08:43.165 6251.126 - 6276.332: 1.8336% ( 39) 00:08:43.165 6276.332 - 6301.538: 2.1047% ( 38) 00:08:43.165 6301.538 - 6326.745: 2.2974% ( 27) 00:08:43.165 6326.745 - 6351.951: 2.5756% ( 39) 00:08:43.165 6351.951 - 6377.157: 2.8539% ( 39) 00:08:43.165 6377.157 - 6402.363: 3.0822% ( 32) 00:08:43.165 6402.363 - 6427.569: 3.3319% ( 35) 00:08:43.165 6427.569 - 6452.775: 3.5959% ( 37) 00:08:43.165 6452.775 - 6503.188: 4.1667% ( 80) 00:08:43.165 6503.188 - 6553.600: 4.7517% ( 82) 00:08:43.165 6553.600 - 6604.012: 5.3439% ( 83) 00:08:43.165 6604.012 - 6654.425: 5.9717% ( 88) 00:08:43.165 6654.425 - 6704.837: 6.6424% ( 94) 00:08:43.165 6704.837 - 6755.249: 7.3559% ( 100) 00:08:43.165 6755.249 - 6805.662: 8.1050% ( 105) 00:08:43.165 6805.662 - 6856.074: 8.8256% ( 101) 00:08:43.165 6856.074 - 6906.486: 9.5890% ( 107) 00:08:43.165 6906.486 - 6956.898: 10.3311% ( 104) 00:08:43.165 6956.898 - 7007.311: 11.1515% ( 115) 00:08:43.165 7007.311 - 7057.723: 11.9078% ( 106) 00:08:43.165 7057.723 - 7108.135: 12.4501% ( 76) 00:08:43.165 7108.135 - 7158.548: 13.0137% ( 79) 00:08:43.165 
7158.548 - 7208.960: 13.6273% ( 86) 00:08:43.165 7208.960 - 7259.372: 14.3693% ( 104) 00:08:43.165 7259.372 - 7309.785: 15.1327% ( 107) 00:08:43.165 7309.785 - 7360.197: 15.8961% ( 107) 00:08:43.165 7360.197 - 7410.609: 16.7166% ( 115) 00:08:43.165 7410.609 - 7461.022: 17.6869% ( 136) 00:08:43.165 7461.022 - 7511.434: 18.6501% ( 135) 00:08:43.165 7511.434 - 7561.846: 19.7417% ( 153) 00:08:43.165 7561.846 - 7612.258: 20.9475% ( 169) 00:08:43.165 7612.258 - 7662.671: 22.3245% ( 193) 00:08:43.165 7662.671 - 7713.083: 23.7015% ( 193) 00:08:43.165 7713.083 - 7763.495: 25.0357% ( 187) 00:08:43.165 7763.495 - 7813.908: 26.3413% ( 183) 00:08:43.165 7813.908 - 7864.320: 27.7183% ( 193) 00:08:43.165 7864.320 - 7914.732: 29.3593% ( 230) 00:08:43.165 7914.732 - 7965.145: 31.1929% ( 257) 00:08:43.165 7965.145 - 8015.557: 33.0479% ( 260) 00:08:43.165 8015.557 - 8065.969: 35.0671% ( 283) 00:08:43.165 8065.969 - 8116.382: 37.1861% ( 297) 00:08:43.165 8116.382 - 8166.794: 39.3479% ( 303) 00:08:43.165 8166.794 - 8217.206: 41.6167% ( 318) 00:08:43.165 8217.206 - 8267.618: 43.9783% ( 331) 00:08:43.165 8267.618 - 8318.031: 46.1401% ( 303) 00:08:43.165 8318.031 - 8368.443: 48.2734% ( 299) 00:08:43.165 8368.443 - 8418.855: 50.3710% ( 294) 00:08:43.165 8418.855 - 8469.268: 52.4757% ( 295) 00:08:43.165 8469.268 - 8519.680: 54.5591% ( 292) 00:08:43.165 8519.680 - 8570.092: 56.5782% ( 283) 00:08:43.165 8570.092 - 8620.505: 58.5402% ( 275) 00:08:43.165 8620.505 - 8670.917: 60.4238% ( 264) 00:08:43.165 8670.917 - 8721.329: 62.0291% ( 225) 00:08:43.165 8721.329 - 8771.742: 63.7557% ( 242) 00:08:43.165 8771.742 - 8822.154: 65.5180% ( 247) 00:08:43.165 8822.154 - 8872.566: 67.2803% ( 247) 00:08:43.165 8872.566 - 8922.978: 68.8071% ( 214) 00:08:43.165 8922.978 - 8973.391: 70.2982% ( 209) 00:08:43.165 8973.391 - 9023.803: 71.6396% ( 188) 00:08:43.165 9023.803 - 9074.215: 72.8096% ( 164) 00:08:43.165 9074.215 - 9124.628: 73.8442% ( 145) 00:08:43.165 9124.628 - 9175.040: 75.0285% ( 166) 00:08:43.165 9175.040 - 9225.452: 75.8562% ( 116) 00:08:43.165 9225.452 - 9275.865: 76.7337% ( 123) 00:08:43.165 9275.865 - 9326.277: 77.5114% ( 109) 00:08:43.165 9326.277 - 9376.689: 78.2463% ( 103) 00:08:43.165 9376.689 - 9427.102: 79.0026% ( 106) 00:08:43.165 9427.102 - 9477.514: 79.7874% ( 110) 00:08:43.165 9477.514 - 9527.926: 80.5080% ( 101) 00:08:43.165 9527.926 - 9578.338: 81.1644% ( 92) 00:08:43.165 9578.338 - 9628.751: 81.8208% ( 92) 00:08:43.165 9628.751 - 9679.163: 82.4486% ( 88) 00:08:43.165 9679.163 - 9729.575: 82.9409% ( 69) 00:08:43.165 9729.575 - 9779.988: 83.4760% ( 75) 00:08:43.165 9779.988 - 9830.400: 83.9184% ( 62) 00:08:43.165 9830.400 - 9880.812: 84.4963% ( 81) 00:08:43.165 9880.812 - 9931.225: 84.8673% ( 52) 00:08:43.165 9931.225 - 9981.637: 85.3382% ( 66) 00:08:43.165 9981.637 - 10032.049: 85.7163% ( 53) 00:08:43.165 10032.049 - 10082.462: 86.1087% ( 55) 00:08:43.165 10082.462 - 10132.874: 86.5083% ( 56) 00:08:43.165 10132.874 - 10183.286: 86.8793% ( 52) 00:08:43.165 10183.286 - 10233.698: 87.2003% ( 45) 00:08:43.165 10233.698 - 10284.111: 87.5071% ( 43) 00:08:43.165 10284.111 - 10334.523: 87.8353% ( 46) 00:08:43.165 10334.523 - 10384.935: 88.1207% ( 40) 00:08:43.165 10384.935 - 10435.348: 88.4418% ( 45) 00:08:43.165 10435.348 - 10485.760: 88.7414% ( 42) 00:08:43.165 10485.760 - 10536.172: 88.9555% ( 30) 00:08:43.166 10536.172 - 10586.585: 89.1695% ( 30) 00:08:43.166 10586.585 - 10636.997: 89.3836% ( 30) 00:08:43.166 10636.997 - 10687.409: 89.5691% ( 26) 00:08:43.166 10687.409 - 10737.822: 89.7688% ( 28) 00:08:43.166 
10737.822 - 10788.234: 89.9757% ( 29) 00:08:43.166 10788.234 - 10838.646: 90.2540% ( 39) 00:08:43.166 10838.646 - 10889.058: 90.3896% ( 19) 00:08:43.166 10889.058 - 10939.471: 90.5893% ( 28) 00:08:43.166 10939.471 - 10989.883: 90.7606% ( 24) 00:08:43.166 10989.883 - 11040.295: 90.9104% ( 21) 00:08:43.166 11040.295 - 11090.708: 91.0388% ( 18) 00:08:43.166 11090.708 - 11141.120: 91.1886% ( 21) 00:08:43.166 11141.120 - 11191.532: 91.3527% ( 23) 00:08:43.166 11191.532 - 11241.945: 91.4954% ( 20) 00:08:43.166 11241.945 - 11292.357: 91.6595% ( 23) 00:08:43.166 11292.357 - 11342.769: 91.8236% ( 23) 00:08:43.166 11342.769 - 11393.182: 91.9521% ( 18) 00:08:43.166 11393.182 - 11443.594: 92.1162% ( 23) 00:08:43.166 11443.594 - 11494.006: 92.2660% ( 21) 00:08:43.166 11494.006 - 11544.418: 92.3944% ( 18) 00:08:43.166 11544.418 - 11594.831: 92.5942% ( 28) 00:08:43.166 11594.831 - 11645.243: 92.7511% ( 22) 00:08:43.166 11645.243 - 11695.655: 92.9081% ( 22) 00:08:43.166 11695.655 - 11746.068: 93.0865% ( 25) 00:08:43.166 11746.068 - 11796.480: 93.2648% ( 25) 00:08:43.166 11796.480 - 11846.892: 93.4361% ( 24) 00:08:43.166 11846.892 - 11897.305: 93.5859% ( 21) 00:08:43.166 11897.305 - 11947.717: 93.7571% ( 24) 00:08:43.166 11947.717 - 11998.129: 93.8998% ( 20) 00:08:43.166 11998.129 - 12048.542: 94.0211% ( 17) 00:08:43.166 12048.542 - 12098.954: 94.1424% ( 17) 00:08:43.166 12098.954 - 12149.366: 94.2780% ( 19) 00:08:43.166 12149.366 - 12199.778: 94.3422% ( 9) 00:08:43.166 12199.778 - 12250.191: 94.4064% ( 9) 00:08:43.166 12250.191 - 12300.603: 94.4635% ( 8) 00:08:43.166 12300.603 - 12351.015: 94.5277% ( 9) 00:08:43.166 12351.015 - 12401.428: 94.5990% ( 10) 00:08:43.166 12401.428 - 12451.840: 94.6632% ( 9) 00:08:43.166 12451.840 - 12502.252: 94.6918% ( 4) 00:08:43.166 12502.252 - 12552.665: 94.7132% ( 3) 00:08:43.166 12552.665 - 12603.077: 94.7275% ( 2) 00:08:43.166 12603.077 - 12653.489: 94.7703% ( 6) 00:08:43.166 12653.489 - 12703.902: 94.7774% ( 1) 00:08:43.166 12703.902 - 12754.314: 94.7988% ( 3) 00:08:43.166 12754.314 - 12804.726: 94.8773% ( 11) 00:08:43.166 12804.726 - 12855.138: 94.8916% ( 2) 00:08:43.166 12855.138 - 12905.551: 94.9130% ( 3) 00:08:43.166 12905.551 - 13006.375: 94.9272% ( 2) 00:08:43.166 13006.375 - 13107.200: 94.9486% ( 3) 00:08:43.166 13107.200 - 13208.025: 94.9772% ( 4) 00:08:43.166 15022.868 - 15123.692: 94.9986% ( 3) 00:08:43.166 15123.692 - 15224.517: 95.0057% ( 1) 00:08:43.166 15224.517 - 15325.342: 95.0200% ( 2) 00:08:43.166 15325.342 - 15426.166: 95.0342% ( 2) 00:08:43.166 15426.166 - 15526.991: 95.0414% ( 1) 00:08:43.166 15526.991 - 15627.815: 95.0842% ( 6) 00:08:43.166 15627.815 - 15728.640: 95.0913% ( 1) 00:08:43.166 15829.465 - 15930.289: 95.1056% ( 2) 00:08:43.166 15930.289 - 16031.114: 95.1341% ( 4) 00:08:43.166 16031.114 - 16131.938: 95.2197% ( 12) 00:08:43.166 16131.938 - 16232.763: 95.3696% ( 21) 00:08:43.166 16232.763 - 16333.588: 95.5123% ( 20) 00:08:43.166 16333.588 - 16434.412: 95.6407% ( 18) 00:08:43.166 16434.412 - 16535.237: 95.7763% ( 19) 00:08:43.166 16535.237 - 16636.062: 95.8833% ( 15) 00:08:43.166 16636.062 - 16736.886: 96.0117% ( 18) 00:08:43.166 16736.886 - 16837.711: 96.1473% ( 19) 00:08:43.166 16837.711 - 16938.535: 96.2614% ( 16) 00:08:43.166 16938.535 - 17039.360: 96.3756% ( 16) 00:08:43.166 17039.360 - 17140.185: 96.5183% ( 20) 00:08:43.166 17140.185 - 17241.009: 96.6324% ( 16) 00:08:43.166 17241.009 - 17341.834: 96.7751% ( 20) 00:08:43.166 17341.834 - 17442.658: 96.9035% ( 18) 00:08:43.166 17442.658 - 17543.483: 97.0391% ( 19) 00:08:43.166 17543.483 
- 17644.308: 97.1461% ( 15) 00:08:43.166 17644.308 - 17745.132: 97.2674% ( 17) 00:08:43.166 17745.132 - 17845.957: 97.3530% ( 12) 00:08:43.166 17845.957 - 17946.782: 97.4814% ( 18) 00:08:43.166 17946.782 - 18047.606: 97.6241% ( 20) 00:08:43.166 18047.606 - 18148.431: 97.7098% ( 12) 00:08:43.166 18148.431 - 18249.255: 97.8239% ( 16) 00:08:43.166 18249.255 - 18350.080: 97.9167% ( 13) 00:08:43.166 18350.080 - 18450.905: 98.0308% ( 16) 00:08:43.166 18450.905 - 18551.729: 98.1521% ( 17) 00:08:43.166 18551.729 - 18652.554: 98.2734% ( 17) 00:08:43.166 18652.554 - 18753.378: 98.3519% ( 11) 00:08:43.166 18753.378 - 18854.203: 98.4803% ( 18) 00:08:43.166 18854.203 - 18955.028: 98.6087% ( 18) 00:08:43.166 18955.028 - 19055.852: 98.7086% ( 14) 00:08:43.166 19055.852 - 19156.677: 98.8156% ( 15) 00:08:43.166 19156.677 - 19257.502: 98.9298% ( 16) 00:08:43.166 19257.502 - 19358.326: 99.0011% ( 10) 00:08:43.166 19358.326 - 19459.151: 99.0654% ( 9) 00:08:43.166 19459.151 - 19559.975: 99.0868% ( 3) 00:08:43.166 37910.055 - 38111.705: 99.1224% ( 5) 00:08:43.166 38111.705 - 38313.354: 99.1296% ( 1) 00:08:43.166 38313.354 - 38515.003: 99.1581% ( 4) 00:08:43.166 38515.003 - 38716.652: 99.1795% ( 3) 00:08:43.166 38716.652 - 38918.302: 99.2009% ( 3) 00:08:43.166 38918.302 - 39119.951: 99.2295% ( 4) 00:08:43.166 39119.951 - 39321.600: 99.2509% ( 3) 00:08:43.166 39321.600 - 39523.249: 99.2723% ( 3) 00:08:43.166 39523.249 - 39724.898: 99.3008% ( 4) 00:08:43.166 39724.898 - 39926.548: 99.3222% ( 3) 00:08:43.166 39926.548 - 40128.197: 99.3365% ( 2) 00:08:43.166 40128.197 - 40329.846: 99.3721% ( 5) 00:08:43.166 40329.846 - 40531.495: 99.3864% ( 2) 00:08:43.166 40531.495 - 40733.145: 99.4150% ( 4) 00:08:43.166 40733.145 - 40934.794: 99.4364% ( 3) 00:08:43.166 40934.794 - 41136.443: 99.4649% ( 4) 00:08:43.166 41136.443 - 41338.092: 99.4720% ( 1) 00:08:43.166 41338.092 - 41539.742: 99.5077% ( 5) 00:08:43.166 41741.391 - 41943.040: 99.5434% ( 5) 00:08:43.166 49000.763 - 49202.412: 99.5648% ( 3) 00:08:43.166 49202.412 - 49404.062: 99.5719% ( 1) 00:08:43.166 49404.062 - 49605.711: 99.6076% ( 5) 00:08:43.166 49605.711 - 49807.360: 99.6219% ( 2) 00:08:43.166 49807.360 - 50009.009: 99.6504% ( 4) 00:08:43.166 50009.009 - 50210.658: 99.6789% ( 4) 00:08:43.166 50210.658 - 50412.308: 99.6932% ( 2) 00:08:43.166 50412.308 - 50613.957: 99.7217% ( 4) 00:08:43.166 50613.957 - 50815.606: 99.7432% ( 3) 00:08:43.166 50815.606 - 51017.255: 99.7646% ( 3) 00:08:43.166 51017.255 - 51218.905: 99.7860% ( 3) 00:08:43.166 51218.905 - 51420.554: 99.8074% ( 3) 00:08:43.166 51420.554 - 51622.203: 99.8359% ( 4) 00:08:43.166 51622.203 - 52025.502: 99.8858% ( 7) 00:08:43.166 52025.502 - 52428.800: 99.9358% ( 7) 00:08:43.166 52428.800 - 52832.098: 99.9786% ( 6) 00:08:43.166 52832.098 - 53235.397: 100.0000% ( 3) 00:08:43.166 00:08:43.166 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:43.166 ============================================================================== 00:08:43.166 Range in us Cumulative IO count 00:08:43.166 6049.477 - 6074.683: 0.0357% ( 5) 00:08:43.166 6074.683 - 6099.889: 0.0928% ( 8) 00:08:43.166 6099.889 - 6125.095: 0.1427% ( 7) 00:08:43.166 6125.095 - 6150.302: 0.2783% ( 19) 00:08:43.166 6150.302 - 6175.508: 0.4994% ( 31) 00:08:43.166 6175.508 - 6200.714: 0.6707% ( 24) 00:08:43.166 6200.714 - 6225.920: 0.9275% ( 36) 00:08:43.166 6225.920 - 6251.126: 1.1986% ( 38) 00:08:43.166 6251.126 - 6276.332: 1.4269% ( 32) 00:08:43.166 6276.332 - 6301.538: 1.6553% ( 32) 00:08:43.166 6301.538 - 6326.745: 1.8693% ( 30) 
00:08:43.166 6326.745 - 6351.951: 2.1475% ( 39) 00:08:43.166 6351.951 - 6377.157: 2.3759% ( 32) 00:08:43.166 6377.157 - 6402.363: 2.6184% ( 34) 00:08:43.166 6402.363 - 6427.569: 2.9752% ( 50) 00:08:43.166 6427.569 - 6452.775: 3.3248% ( 49) 00:08:43.166 6452.775 - 6503.188: 3.9098% ( 82) 00:08:43.166 6503.188 - 6553.600: 4.4592% ( 77) 00:08:43.166 6553.600 - 6604.012: 5.1655% ( 99) 00:08:43.166 6604.012 - 6654.425: 5.9717% ( 113) 00:08:43.166 6654.425 - 6704.837: 6.7352% ( 107) 00:08:43.166 6704.837 - 6755.249: 7.4201% ( 96) 00:08:43.166 6755.249 - 6805.662: 8.1835% ( 107) 00:08:43.166 6805.662 - 6856.074: 9.0183% ( 117) 00:08:43.166 6856.074 - 6906.486: 9.8459% ( 116) 00:08:43.166 6906.486 - 6956.898: 10.5950% ( 105) 00:08:43.166 6956.898 - 7007.311: 11.3156% ( 101) 00:08:43.166 7007.311 - 7057.723: 11.9649% ( 91) 00:08:43.166 7057.723 - 7108.135: 12.5642% ( 84) 00:08:43.166 7108.135 - 7158.548: 13.2277% ( 93) 00:08:43.166 7158.548 - 7208.960: 13.8199% ( 83) 00:08:43.166 7208.960 - 7259.372: 14.2979% ( 67) 00:08:43.166 7259.372 - 7309.785: 14.8830% ( 82) 00:08:43.166 7309.785 - 7360.197: 15.5751% ( 97) 00:08:43.166 7360.197 - 7410.609: 16.2457% ( 94) 00:08:43.166 7410.609 - 7461.022: 17.0519% ( 113) 00:08:43.166 7461.022 - 7511.434: 18.0080% ( 134) 00:08:43.166 7511.434 - 7561.846: 18.9426% ( 131) 00:08:43.166 7561.846 - 7612.258: 19.9843% ( 146) 00:08:43.166 7612.258 - 7662.671: 21.2543% ( 178) 00:08:43.166 7662.671 - 7713.083: 22.6884% ( 201) 00:08:43.166 7713.083 - 7763.495: 24.1938% ( 211) 00:08:43.166 7763.495 - 7813.908: 25.7705% ( 221) 00:08:43.166 7813.908 - 7864.320: 27.3545% ( 222) 00:08:43.166 7864.320 - 7914.732: 28.8385% ( 208) 00:08:43.166 7914.732 - 7965.145: 30.5865% ( 245) 00:08:43.166 7965.145 - 8015.557: 32.4415% ( 260) 00:08:43.166 8015.557 - 8065.969: 34.4749% ( 285) 00:08:43.166 8065.969 - 8116.382: 36.6153% ( 300) 00:08:43.166 8116.382 - 8166.794: 38.7914% ( 305) 00:08:43.166 8166.794 - 8217.206: 41.0245% ( 313) 00:08:43.166 8217.206 - 8267.618: 43.2862% ( 317) 00:08:43.166 8267.618 - 8318.031: 45.7192% ( 341) 00:08:43.166 8318.031 - 8368.443: 47.9381% ( 311) 00:08:43.167 8368.443 - 8418.855: 50.2568% ( 325) 00:08:43.167 8418.855 - 8469.268: 52.5328% ( 319) 00:08:43.167 8469.268 - 8519.680: 54.6162% ( 292) 00:08:43.167 8519.680 - 8570.092: 56.6495% ( 285) 00:08:43.167 8570.092 - 8620.505: 58.6829% ( 285) 00:08:43.167 8620.505 - 8670.917: 60.7235% ( 286) 00:08:43.167 8670.917 - 8721.329: 62.6142% ( 265) 00:08:43.167 8721.329 - 8771.742: 64.4406% ( 256) 00:08:43.167 8771.742 - 8822.154: 66.2600% ( 255) 00:08:43.167 8822.154 - 8872.566: 68.0579% ( 252) 00:08:43.167 8872.566 - 8922.978: 69.5990% ( 216) 00:08:43.167 8922.978 - 8973.391: 71.0331% ( 201) 00:08:43.167 8973.391 - 9023.803: 72.3459% ( 184) 00:08:43.167 9023.803 - 9074.215: 73.5731% ( 172) 00:08:43.167 9074.215 - 9124.628: 74.7432% ( 164) 00:08:43.167 9124.628 - 9175.040: 75.7491% ( 141) 00:08:43.167 9175.040 - 9225.452: 76.6196% ( 122) 00:08:43.167 9225.452 - 9275.865: 77.3687% ( 105) 00:08:43.167 9275.865 - 9326.277: 78.1107% ( 104) 00:08:43.167 9326.277 - 9376.689: 78.8670% ( 106) 00:08:43.167 9376.689 - 9427.102: 79.5591% ( 97) 00:08:43.167 9427.102 - 9477.514: 80.2012% ( 90) 00:08:43.167 9477.514 - 9527.926: 80.8148% ( 86) 00:08:43.167 9527.926 - 9578.338: 81.3499% ( 75) 00:08:43.167 9578.338 - 9628.751: 81.8993% ( 77) 00:08:43.167 9628.751 - 9679.163: 82.3844% ( 68) 00:08:43.167 9679.163 - 9729.575: 82.8268% ( 62) 00:08:43.167 9729.575 - 9779.988: 83.2477% ( 59) 00:08:43.167 9779.988 - 9830.400: 83.6259% 
( 53) 00:08:43.167 9830.400 - 9880.812: 84.0254% ( 56) 00:08:43.167 9880.812 - 9931.225: 84.4606% ( 61) 00:08:43.167 9931.225 - 9981.637: 84.9101% ( 63) 00:08:43.167 9981.637 - 10032.049: 85.3168% ( 57) 00:08:43.167 10032.049 - 10082.462: 85.7591% ( 62) 00:08:43.167 10082.462 - 10132.874: 86.2015% ( 62) 00:08:43.167 10132.874 - 10183.286: 86.6153% ( 58) 00:08:43.167 10183.286 - 10233.698: 87.0291% ( 58) 00:08:43.167 10233.698 - 10284.111: 87.3787% ( 49) 00:08:43.167 10284.111 - 10334.523: 87.7212% ( 48) 00:08:43.167 10334.523 - 10384.935: 88.0422% ( 45) 00:08:43.167 10384.935 - 10435.348: 88.4204% ( 53) 00:08:43.167 10435.348 - 10485.760: 88.7414% ( 45) 00:08:43.167 10485.760 - 10536.172: 89.0482% ( 43) 00:08:43.167 10536.172 - 10586.585: 89.3265% ( 39) 00:08:43.167 10586.585 - 10636.997: 89.6190% ( 41) 00:08:43.167 10636.997 - 10687.409: 89.8901% ( 38) 00:08:43.167 10687.409 - 10737.822: 90.1327% ( 34) 00:08:43.167 10737.822 - 10788.234: 90.3396% ( 29) 00:08:43.167 10788.234 - 10838.646: 90.5394% ( 28) 00:08:43.167 10838.646 - 10889.058: 90.7463% ( 29) 00:08:43.167 10889.058 - 10939.471: 90.9247% ( 25) 00:08:43.167 10939.471 - 10989.883: 91.1102% ( 26) 00:08:43.167 10989.883 - 11040.295: 91.2600% ( 21) 00:08:43.167 11040.295 - 11090.708: 91.3884% ( 18) 00:08:43.167 11090.708 - 11141.120: 91.5382% ( 21) 00:08:43.167 11141.120 - 11191.532: 91.7095% ( 24) 00:08:43.167 11191.532 - 11241.945: 91.9021% ( 27) 00:08:43.167 11241.945 - 11292.357: 92.0519% ( 21) 00:08:43.167 11292.357 - 11342.769: 92.2089% ( 22) 00:08:43.167 11342.769 - 11393.182: 92.3445% ( 19) 00:08:43.167 11393.182 - 11443.594: 92.4658% ( 17) 00:08:43.167 11443.594 - 11494.006: 92.5942% ( 18) 00:08:43.167 11494.006 - 11544.418: 92.7083% ( 16) 00:08:43.167 11544.418 - 11594.831: 92.8368% ( 18) 00:08:43.167 11594.831 - 11645.243: 92.9866% ( 21) 00:08:43.167 11645.243 - 11695.655: 93.1507% ( 23) 00:08:43.167 11695.655 - 11746.068: 93.2934% ( 20) 00:08:43.167 11746.068 - 11796.480: 93.4218% ( 18) 00:08:43.167 11796.480 - 11846.892: 93.6073% ( 26) 00:08:43.167 11846.892 - 11897.305: 93.7714% ( 23) 00:08:43.167 11897.305 - 11947.717: 93.9284% ( 22) 00:08:43.167 11947.717 - 11998.129: 94.0925% ( 23) 00:08:43.167 11998.129 - 12048.542: 94.2280% ( 19) 00:08:43.167 12048.542 - 12098.954: 94.3208% ( 13) 00:08:43.167 12098.954 - 12149.366: 94.3850% ( 9) 00:08:43.167 12149.366 - 12199.778: 94.4635% ( 11) 00:08:43.167 12199.778 - 12250.191: 94.5277% ( 9) 00:08:43.167 12250.191 - 12300.603: 94.6062% ( 11) 00:08:43.167 12300.603 - 12351.015: 94.6775% ( 10) 00:08:43.167 12351.015 - 12401.428: 94.7489% ( 10) 00:08:43.167 12401.428 - 12451.840: 94.8345% ( 12) 00:08:43.167 12451.840 - 12502.252: 94.8630% ( 4) 00:08:43.167 12502.252 - 12552.665: 94.8844% ( 3) 00:08:43.167 12552.665 - 12603.077: 94.9058% ( 3) 00:08:43.167 12603.077 - 12653.489: 94.9272% ( 3) 00:08:43.167 12653.489 - 12703.902: 94.9486% ( 3) 00:08:43.167 12703.902 - 12754.314: 94.9772% ( 4) 00:08:43.167 14518.745 - 14619.569: 94.9914% ( 2) 00:08:43.167 14619.569 - 14720.394: 95.0128% ( 3) 00:08:43.167 14720.394 - 14821.218: 95.0414% ( 4) 00:08:43.167 14821.218 - 14922.043: 95.0628% ( 3) 00:08:43.167 14922.043 - 15022.868: 95.0913% ( 4) 00:08:43.167 15022.868 - 15123.692: 95.1199% ( 4) 00:08:43.167 15123.692 - 15224.517: 95.1413% ( 3) 00:08:43.167 15224.517 - 15325.342: 95.1627% ( 3) 00:08:43.167 15325.342 - 15426.166: 95.1912% ( 4) 00:08:43.167 15426.166 - 15526.991: 95.2126% ( 3) 00:08:43.167 15526.991 - 15627.815: 95.2412% ( 4) 00:08:43.167 15627.815 - 15728.640: 95.2626% ( 3) 
00:08:43.167 15728.640 - 15829.465: 95.2911% ( 4) 00:08:43.167 15829.465 - 15930.289: 95.3125% ( 3) 00:08:43.167 15930.289 - 16031.114: 95.3410% ( 4) 00:08:43.167 16031.114 - 16131.938: 95.3624% ( 3) 00:08:43.167 16131.938 - 16232.763: 95.3910% ( 4) 00:08:43.167 16232.763 - 16333.588: 95.4409% ( 7) 00:08:43.167 16333.588 - 16434.412: 95.5265% ( 12) 00:08:43.167 16434.412 - 16535.237: 95.6621% ( 19) 00:08:43.167 16535.237 - 16636.062: 95.8333% ( 24) 00:08:43.167 16636.062 - 16736.886: 95.9618% ( 18) 00:08:43.167 16736.886 - 16837.711: 96.1187% ( 22) 00:08:43.167 16837.711 - 16938.535: 96.2329% ( 16) 00:08:43.167 16938.535 - 17039.360: 96.3756% ( 20) 00:08:43.167 17039.360 - 17140.185: 96.5183% ( 20) 00:08:43.167 17140.185 - 17241.009: 96.6752% ( 22) 00:08:43.167 17241.009 - 17341.834: 96.8179% ( 20) 00:08:43.167 17341.834 - 17442.658: 96.9392% ( 17) 00:08:43.167 17442.658 - 17543.483: 97.0890% ( 21) 00:08:43.167 17543.483 - 17644.308: 97.2103% ( 17) 00:08:43.167 17644.308 - 17745.132: 97.3316% ( 17) 00:08:43.167 17745.132 - 17845.957: 97.4672% ( 19) 00:08:43.167 17845.957 - 17946.782: 97.5885% ( 17) 00:08:43.167 17946.782 - 18047.606: 97.7240% ( 19) 00:08:43.167 18047.606 - 18148.431: 97.8596% ( 19) 00:08:43.167 18148.431 - 18249.255: 97.9737% ( 16) 00:08:43.167 18249.255 - 18350.080: 98.1164% ( 20) 00:08:43.167 18350.080 - 18450.905: 98.2520% ( 19) 00:08:43.167 18450.905 - 18551.729: 98.3662% ( 16) 00:08:43.167 18551.729 - 18652.554: 98.5017% ( 19) 00:08:43.167 18652.554 - 18753.378: 98.6444% ( 20) 00:08:43.167 18753.378 - 18854.203: 98.7728% ( 18) 00:08:43.167 18854.203 - 18955.028: 98.8799% ( 15) 00:08:43.167 18955.028 - 19055.852: 98.9869% ( 15) 00:08:43.167 19055.852 - 19156.677: 99.0368% ( 7) 00:08:43.167 19156.677 - 19257.502: 99.0725% ( 5) 00:08:43.167 19257.502 - 19358.326: 99.0868% ( 2) 00:08:43.167 35288.615 - 35490.265: 99.1010% ( 2) 00:08:43.167 35490.265 - 35691.914: 99.1296% ( 4) 00:08:43.167 35691.914 - 35893.563: 99.1581% ( 4) 00:08:43.167 35893.563 - 36095.212: 99.1795% ( 3) 00:08:43.167 36095.212 - 36296.862: 99.2080% ( 4) 00:08:43.167 36296.862 - 36498.511: 99.2295% ( 3) 00:08:43.167 36498.511 - 36700.160: 99.2580% ( 4) 00:08:43.167 36700.160 - 36901.809: 99.2865% ( 4) 00:08:43.167 36901.809 - 37103.458: 99.3079% ( 3) 00:08:43.167 37103.458 - 37305.108: 99.3365% ( 4) 00:08:43.167 37305.108 - 37506.757: 99.3650% ( 4) 00:08:43.167 37506.757 - 37708.406: 99.3864% ( 3) 00:08:43.167 37708.406 - 37910.055: 99.4150% ( 4) 00:08:43.167 37910.055 - 38111.705: 99.4435% ( 4) 00:08:43.167 38111.705 - 38313.354: 99.4649% ( 3) 00:08:43.167 38313.354 - 38515.003: 99.4934% ( 4) 00:08:43.167 38515.003 - 38716.652: 99.5148% ( 3) 00:08:43.167 38716.652 - 38918.302: 99.5434% ( 4) 00:08:43.167 46379.323 - 46580.972: 99.5505% ( 1) 00:08:43.167 46580.972 - 46782.622: 99.5791% ( 4) 00:08:43.167 46782.622 - 46984.271: 99.6076% ( 4) 00:08:43.167 46984.271 - 47185.920: 99.6290% ( 3) 00:08:43.167 47185.920 - 47387.569: 99.6575% ( 4) 00:08:43.167 47387.569 - 47589.218: 99.6789% ( 3) 00:08:43.167 47589.218 - 47790.868: 99.7075% ( 4) 00:08:43.167 47790.868 - 47992.517: 99.7360% ( 4) 00:08:43.167 47992.517 - 48194.166: 99.7574% ( 3) 00:08:43.167 48194.166 - 48395.815: 99.7860% ( 4) 00:08:43.167 48395.815 - 48597.465: 99.8145% ( 4) 00:08:43.167 48597.465 - 48799.114: 99.8359% ( 3) 00:08:43.167 48799.114 - 49000.763: 99.8644% ( 4) 00:08:43.167 49000.763 - 49202.412: 99.8858% ( 3) 00:08:43.167 49202.412 - 49404.062: 99.9144% ( 4) 00:08:43.167 49404.062 - 49605.711: 99.9429% ( 4) 00:08:43.167 49605.711 - 
49807.360: 99.9715% ( 4) 00:08:43.167 49807.360 - 50009.009: 99.9929% ( 3) 00:08:43.167 50009.009 - 50210.658: 100.0000% ( 1) 00:08:43.167 00:08:43.167 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:43.167 ============================================================================== 00:08:43.167 Range in us Cumulative IO count 00:08:43.167 6049.477 - 6074.683: 0.0071% ( 1) 00:08:43.167 6074.683 - 6099.889: 0.0856% ( 11) 00:08:43.167 6099.889 - 6125.095: 0.2069% ( 17) 00:08:43.167 6125.095 - 6150.302: 0.3639% ( 22) 00:08:43.167 6150.302 - 6175.508: 0.5137% ( 21) 00:08:43.167 6175.508 - 6200.714: 0.6921% ( 25) 00:08:43.167 6200.714 - 6225.920: 0.8776% ( 26) 00:08:43.167 6225.920 - 6251.126: 1.1772% ( 42) 00:08:43.167 6251.126 - 6276.332: 1.4555% ( 39) 00:08:43.167 6276.332 - 6301.538: 1.7837% ( 46) 00:08:43.167 6301.538 - 6326.745: 2.0762% ( 41) 00:08:43.167 6326.745 - 6351.951: 2.3188% ( 34) 00:08:43.167 6351.951 - 6377.157: 2.6398% ( 45) 00:08:43.168 6377.157 - 6402.363: 3.0251% ( 54) 00:08:43.168 6402.363 - 6427.569: 3.3890% ( 51) 00:08:43.168 6427.569 - 6452.775: 3.7172% ( 46) 00:08:43.168 6452.775 - 6503.188: 4.3094% ( 83) 00:08:43.168 6503.188 - 6553.600: 5.0086% ( 98) 00:08:43.168 6553.600 - 6604.012: 5.7149% ( 99) 00:08:43.168 6604.012 - 6654.425: 6.4355% ( 101) 00:08:43.168 6654.425 - 6704.837: 7.0848% ( 91) 00:08:43.168 6704.837 - 6755.249: 7.7483% ( 93) 00:08:43.168 6755.249 - 6805.662: 8.4832% ( 103) 00:08:43.168 6805.662 - 6856.074: 9.3393% ( 120) 00:08:43.168 6856.074 - 6906.486: 10.2526% ( 128) 00:08:43.168 6906.486 - 6956.898: 11.1373% ( 124) 00:08:43.168 6956.898 - 7007.311: 11.9720% ( 117) 00:08:43.168 7007.311 - 7057.723: 12.7212% ( 105) 00:08:43.168 7057.723 - 7108.135: 13.3776% ( 92) 00:08:43.168 7108.135 - 7158.548: 14.0197% ( 90) 00:08:43.168 7158.548 - 7208.960: 14.5762% ( 78) 00:08:43.168 7208.960 - 7259.372: 15.0970% ( 73) 00:08:43.168 7259.372 - 7309.785: 15.6963% ( 84) 00:08:43.168 7309.785 - 7360.197: 16.3955% ( 98) 00:08:43.168 7360.197 - 7410.609: 17.2517% ( 120) 00:08:43.168 7410.609 - 7461.022: 18.0651% ( 114) 00:08:43.168 7461.022 - 7511.434: 18.9426% ( 123) 00:08:43.168 7511.434 - 7561.846: 19.8987% ( 134) 00:08:43.168 7561.846 - 7612.258: 21.0046% ( 155) 00:08:43.168 7612.258 - 7662.671: 22.1604% ( 162) 00:08:43.168 7662.671 - 7713.083: 23.3376% ( 165) 00:08:43.168 7713.083 - 7763.495: 24.6861% ( 189) 00:08:43.168 7763.495 - 7813.908: 26.0060% ( 185) 00:08:43.168 7813.908 - 7864.320: 27.4187% ( 198) 00:08:43.168 7864.320 - 7914.732: 28.8955% ( 207) 00:08:43.168 7914.732 - 7965.145: 30.5080% ( 226) 00:08:43.168 7965.145 - 8015.557: 32.5342% ( 284) 00:08:43.168 8015.557 - 8065.969: 34.5106% ( 277) 00:08:43.168 8065.969 - 8116.382: 36.5083% ( 280) 00:08:43.168 8116.382 - 8166.794: 38.6201% ( 296) 00:08:43.168 8166.794 - 8217.206: 40.8105% ( 307) 00:08:43.168 8217.206 - 8267.618: 43.0294% ( 311) 00:08:43.168 8267.618 - 8318.031: 45.3339% ( 323) 00:08:43.168 8318.031 - 8368.443: 47.6455% ( 324) 00:08:43.168 8368.443 - 8418.855: 49.9572% ( 324) 00:08:43.168 8418.855 - 8469.268: 52.1975% ( 314) 00:08:43.168 8469.268 - 8519.680: 54.4307% ( 313) 00:08:43.168 8519.680 - 8570.092: 56.6638% ( 313) 00:08:43.168 8570.092 - 8620.505: 58.7257% ( 289) 00:08:43.168 8620.505 - 8670.917: 60.9375% ( 310) 00:08:43.168 8670.917 - 8721.329: 63.0208% ( 292) 00:08:43.168 8721.329 - 8771.742: 64.9829% ( 275) 00:08:43.168 8771.742 - 8822.154: 66.7594% ( 249) 00:08:43.168 8822.154 - 8872.566: 68.3148% ( 218) 00:08:43.168 8872.566 - 8922.978: 69.8630% ( 217) 
00:08:43.168 [... remaining buckets of the preceding latency histogram omitted; cumulative IO count reaches 100.0000% at 47185.920 us ...]
00:08:43.169 
00:08:43.169 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:43.169 ==============================================================================
00:08:43.169 Range in us Cumulative IO count
00:08:43.170 [... buckets from 6099.889 us omitted; cumulative IO count reaches 100.0000% at 44362.831 us ...]
00:08:43.170 
00:08:43.170 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:43.170 ==============================================================================
00:08:43.170 Range in us Cumulative IO count
00:08:43.171 [... buckets from 6125.095 us omitted; cumulative IO count reaches 100.0000% at 41338.092 us ...]
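Each histogram row above has the form `lo - hi: cumulative% ( count )`: a latency bucket in microseconds, the cumulative fraction of I/Os completed at or below the bucket's upper edge, and the per-bucket I/O count. A minimal parsing sketch (Python, assuming the log has been split back into one entry per line; the regex and helper name are illustrative, not part of SPDK):

```python
import re

# One histogram row, e.g. "8922.978 - 8973.391: 71.3256% ( 205)"
ROW = re.compile(
    r"(?P<lo>\d+\.\d+)\s*-\s*(?P<hi>\d+\.\d+):\s*"
    r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\)"
)

def parse_histogram(lines):
    """Yield (bucket_lo_us, bucket_hi_us, cumulative_pct, io_count) per row."""
    for line in lines:
        m = ROW.search(line)
        if m:
            yield (float(m["lo"]), float(m["hi"]),
                   float(m["cum"]), int(m["count"]))

print(list(parse_histogram(["8922.978 - 8973.391: 71.3256% ( 205)"])))
# -> [(8922.978, 8973.391, 71.3256, 205)]
```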
00:08:43.171 
00:08:43.171 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:43.171 ==============================================================================
00:08:43.171 Range in us Cumulative IO count
00:08:43.172 [... buckets from 6099.889 us omitted; cumulative IO count reaches 100.0000% at 29440.788 us ...]
00:08:43.172 
00:08:43.172 21:59:15 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:44.560 Initializing NVMe Controllers
00:08:44.560 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:44.560 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:44.560 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:44.560 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:44.560 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:44.560 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:44.560 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:44.560 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:44.560 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:44.560 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:44.560 Initialization complete. Launching workers.
00:08:44.560 ========================================================
00:08:44.560 Latency(us)
00:08:44.560 Device Information : IOPS MiB/s Average min max
00:08:44.560 PCIE (0000:00:10.0) NSID 1 from core 0: 14270.64 167.23 8980.81 5706.62 33414.06
00:08:44.560 PCIE (0000:00:11.0) NSID 1 from core 0: 14270.64 167.23 8967.04 6023.69 31732.99
00:08:44.560 PCIE (0000:00:13.0) NSID 1 from core 0: 14270.64 167.23 8952.73 5752.17 30712.71
00:08:44.560 PCIE (0000:00:12.0) NSID 1 from core 0: 14270.64 167.23 8938.52 5782.14 29008.39
00:08:44.560 PCIE (0000:00:12.0) NSID 2 from core 0: 14270.64 167.23 8924.42 5845.37 27108.46
00:08:44.560 PCIE (0000:00:12.0) NSID 3 from core 0: 14270.64 167.23 8910.35 5777.18 25348.88
00:08:44.560 ========================================================
00:08:44.560 Total : 85623.87 1003.40 8945.64 5706.62 33414.06
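For context, the flags on the command above mean roughly: -q 128 (queue depth per namespace), -w write (workload type), -o 12288 (I/O size in bytes), -t 1 (run time in seconds), -LL (detailed latency histograms), -i 0 (shared-memory group ID); these glosses follow spdk_nvme_perf's usage text as I understand it, so treat them as assumptions. Two quick sanity checks on the table (Python; the numbers are copied from the table above, and the Little's-law estimate is only approximate since ramp-up and completion batching are ignored):

```python
IO_SIZE_BYTES = 12288     # from -o 12288 (12 KiB writes)
QUEUE_DEPTH = 128         # from -q 128

iops = 14270.64           # PCIE (0000:00:10.0) NSID 1 row above

# The MiB/s column should equal IOPS * I/O size:
print(f"{iops * IO_SIZE_BYTES / 2**20:.2f} MiB/s")   # -> 167.23, as reported

# Little's law: mean latency ~= outstanding I/Os / IOPS:
print(f"{QUEUE_DEPTH / iops * 1e6:.1f} us")          # -> ~8969.4, near the 8980.81 us average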
00:08:44.560 
00:08:44.560 Summary latency data (us), consolidated across devices:
00:08:44.560 =================================================================================
00:08:44.560 Percentile :     10.0/1     11.0/1     13.0/1     12.0/1     12.0/2     12.0/3
00:08:44.560   1.00000% :   6175.508   6301.538   6200.714   6150.302   6175.508   6200.714
00:08:44.560  10.00000% :   7309.785   7309.785   7360.197   7360.197   7309.785   7309.785
00:08:44.560  25.00000% :   8015.557   8065.969   8065.969   8065.969   8065.969   8065.969
00:08:44.560  50.00000% :   8570.092   8570.092   8519.680   8570.092   8570.092   8570.092
00:08:44.560  75.00000% :   9427.102   9326.277   9376.689   9376.689   9427.102   9376.689
00:08:44.560  90.00000% :  10586.585  10435.348  10485.760  10435.348  10485.760  10536.172
00:08:44.560  95.00000% :  12250.191  12300.603  12300.603  11998.129  11897.305  12199.778
00:08:44.560  98.00000% :  14417.920  14821.218  14317.095  14317.095  14417.920  14417.920
00:08:44.560  99.00000% :  16232.763  15829.465  15728.640  15224.517  14821.218  15325.342
00:08:44.560  99.50000% :  26819.348  26012.751  24802.855  23189.662  22080.591  20568.222
00:08:44.560  99.90000% :  33070.474  31457.280  30449.034  28634.191  26819.348  25004.505
00:08:44.560  99.99000% :  33473.772  31860.578  30852.332  29037.489  27222.646  25407.803
00:08:44.560 (columns are PCIE bus/NSID, e.g. 10.0/1 = PCIE (0000:00:10.0) NSID 1; the 99.999% and finer percentiles equal the 99.99% value for every device)
00:08:44.560 
00:08:44.560 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:44.560 ==============================================================================
00:08:44.560 Range in us Cumulative IO count
00:08:44.561 [... buckets from 5696.591 us omitted; cumulative IO count reaches 100.0000% at 33473.772 us ...]
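The per-device percentile summaries are consistent with reading the cumulative histograms directly: the p-th percentile appears to be the upper edge of the first bucket whose cumulative percentage reaches p (for example, the 50.00000% entry for 0000:00:10.0 NSID 1, 8570.092 us, is a bucket edge in that device's histogram). A sketch of that lookup under this assumption (Python; the function name and sample rows are illustrative, not SPDK output):

```python
def percentile_from_cumulative(rows, pct):
    """rows: ascending (bucket_hi_us, cumulative_pct) pairs.

    Returns the upper bucket edge where the cumulative percentage
    first reaches pct -- the rule the summary tables above appear
    to follow (an assumption, not verified against SPDK source).
    """
    for hi_us, cum_pct in rows:
        if cum_pct >= pct:
            return hi_us
    return None

# Illustrative rows shaped like the 0000:00:10.0 NSID 1 histogram:
rows = [(8519.680, 49.3974), (8570.092, 51.2332)]
print(percentile_from_cumulative(rows, 50.0))   # -> 8570.092, matching the summary
```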
00:08:44.561 
00:08:44.561 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:44.561 ==============================================================================
00:08:44.561 Range in us Cumulative IO count
00:08:44.562 [... buckets from 5999.065 us omitted; cumulative IO count reaches 100.0000% at 31860.578 us ...]
00:08:44.561 6049.477 - 6074.683: 0.0280% ( 2) 00:08:44.561 6074.683 - 6099.889: 0.0420% ( 2) 00:08:44.561 6099.889 - 6125.095: 0.0841% ( 6) 00:08:44.561 6125.095 - 6150.302: 0.2733% ( 27) 00:08:44.561 6150.302 - 6175.508: 0.3643% ( 13) 00:08:44.561 6175.508 - 6200.714: 0.4064% ( 6) 00:08:44.561 6200.714 - 6225.920: 0.5045% ( 14) 00:08:44.561 6225.920 - 6251.126: 0.6026% ( 14) 00:08:44.561 6251.126 - 6276.332: 0.7217% ( 17) 00:08:44.561 6276.332 - 6301.538: 1.0930% ( 53) 00:08:44.561 6301.538 - 6326.745: 1.5625% ( 67) 00:08:44.561 6326.745 - 6351.951: 1.9058% ( 49) 00:08:44.561 6351.951 - 6377.157: 2.0810% ( 25) 00:08:44.561 6377.157 - 6402.363: 2.4594% ( 54) 00:08:44.561 6402.363 - 6427.569: 2.9498% ( 70) 00:08:44.562 6427.569 - 6452.775: 3.1320% ( 26) 00:08:44.562 6452.775 - 6503.188: 3.9238% ( 113) 00:08:44.562 6503.188 - 6553.600: 4.5334% ( 87) 00:08:44.562 6553.600 - 6604.012: 5.2621% ( 104) 00:08:44.562 6604.012 - 6654.425: 5.6404% ( 54) 00:08:44.562 6654.425 - 6704.837: 5.8927% ( 36) 00:08:44.562 6704.837 - 6755.249: 6.5373% ( 92) 00:08:44.562 6755.249 - 6805.662: 6.7895% ( 36) 00:08:44.562 6805.662 - 6856.074: 6.9787% ( 27) 00:08:44.562 6856.074 - 6906.486: 7.1539% ( 25) 00:08:44.562 6906.486 - 6956.898: 7.3430% ( 27) 00:08:44.562 6956.898 - 7007.311: 7.6794% ( 48) 00:08:44.562 7007.311 - 7057.723: 8.1278% ( 64) 00:08:44.562 7057.723 - 7108.135: 8.5202% ( 56) 00:08:44.562 7108.135 - 7158.548: 8.6673% ( 21) 00:08:44.562 7158.548 - 7208.960: 9.0457% ( 54) 00:08:44.562 7208.960 - 7259.372: 9.3820% ( 48) 00:08:44.562 7259.372 - 7309.785: 10.0266% ( 92) 00:08:44.562 7309.785 - 7360.197: 10.6432% ( 88) 00:08:44.562 7360.197 - 7410.609: 10.9515% ( 44) 00:08:44.562 7410.609 - 7461.022: 11.2948% ( 49) 00:08:44.562 7461.022 - 7511.434: 11.6172% ( 46) 00:08:44.562 7511.434 - 7561.846: 11.9535% ( 48) 00:08:44.562 7561.846 - 7612.258: 12.5140% ( 80) 00:08:44.562 7612.258 - 7662.671: 13.2707% ( 108) 00:08:44.562 7662.671 - 7713.083: 13.9924% ( 103) 00:08:44.562 7713.083 - 7763.495: 14.8543% ( 123) 00:08:44.562 7763.495 - 7813.908: 16.4448% ( 227) 00:08:44.562 7813.908 - 7864.320: 18.1614% ( 245) 00:08:44.562 7864.320 - 7914.732: 20.1724% ( 287) 00:08:44.562 7914.732 - 7965.145: 22.3725% ( 314) 00:08:44.562 7965.145 - 8015.557: 24.9159% ( 363) 00:08:44.562 8015.557 - 8065.969: 27.3192% ( 343) 00:08:44.562 8065.969 - 8116.382: 29.5474% ( 318) 00:08:44.562 8116.382 - 8166.794: 31.8666% ( 331) 00:08:44.562 8166.794 - 8217.206: 34.1438% ( 325) 00:08:44.562 8217.206 - 8267.618: 36.7293% ( 369) 00:08:44.562 8267.618 - 8318.031: 39.2447% ( 359) 00:08:44.562 8318.031 - 8368.443: 41.7601% ( 359) 00:08:44.562 8368.443 - 8418.855: 44.4367% ( 382) 00:08:44.562 8418.855 - 8469.268: 47.0642% ( 375) 00:08:44.562 8469.268 - 8519.680: 49.6777% ( 373) 00:08:44.562 8519.680 - 8570.092: 51.8498% ( 310) 00:08:44.562 8570.092 - 8620.505: 54.1200% ( 324) 00:08:44.562 8620.505 - 8670.917: 56.8876% ( 395) 00:08:44.562 8670.917 - 8721.329: 58.8355% ( 278) 00:08:44.562 8721.329 - 8771.742: 60.3770% ( 220) 00:08:44.562 8771.742 - 8822.154: 62.0726% ( 242) 00:08:44.562 8822.154 - 8872.566: 63.7192% ( 235) 00:08:44.562 8872.566 - 8922.978: 65.3798% ( 237) 00:08:44.562 8922.978 - 8973.391: 66.8862% ( 215) 00:08:44.562 8973.391 - 9023.803: 68.2595% ( 196) 00:08:44.562 9023.803 - 9074.215: 69.4997% ( 177) 00:08:44.562 9074.215 - 9124.628: 70.6208% ( 160) 00:08:44.562 9124.628 - 9175.040: 71.9451% ( 189) 00:08:44.562 9175.040 - 9225.452: 73.0311% ( 155) 00:08:44.562 9225.452 - 9275.865: 74.0471% ( 145) 00:08:44.562 9275.865 - 
9326.277: 75.0701% ( 146) 00:08:44.562 9326.277 - 9376.689: 75.8969% ( 118) 00:08:44.562 9376.689 - 9427.102: 76.7797% ( 126) 00:08:44.562 9427.102 - 9477.514: 77.5995% ( 117) 00:08:44.562 9477.514 - 9527.926: 78.3842% ( 112) 00:08:44.562 9527.926 - 9578.338: 79.2741% ( 127) 00:08:44.562 9578.338 - 9628.751: 80.0168% ( 106) 00:08:44.562 9628.751 - 9679.163: 80.7245% ( 101) 00:08:44.562 9679.163 - 9729.575: 81.6143% ( 127) 00:08:44.562 9729.575 - 9779.988: 82.2450% ( 90) 00:08:44.562 9779.988 - 9830.400: 83.0157% ( 110) 00:08:44.562 9830.400 - 9880.812: 83.8565% ( 120) 00:08:44.562 9880.812 - 9931.225: 84.4381% ( 83) 00:08:44.562 9931.225 - 9981.637: 85.0196% ( 83) 00:08:44.562 9981.637 - 10032.049: 85.7273% ( 101) 00:08:44.562 10032.049 - 10082.462: 86.2878% ( 80) 00:08:44.562 10082.462 - 10132.874: 86.8344% ( 78) 00:08:44.562 10132.874 - 10183.286: 87.4159% ( 83) 00:08:44.562 10183.286 - 10233.698: 87.9695% ( 79) 00:08:44.562 10233.698 - 10284.111: 88.5930% ( 89) 00:08:44.562 10284.111 - 10334.523: 89.2657% ( 96) 00:08:44.562 10334.523 - 10384.935: 89.7702% ( 72) 00:08:44.562 10384.935 - 10435.348: 90.1485% ( 54) 00:08:44.562 10435.348 - 10485.760: 90.5549% ( 58) 00:08:44.562 10485.760 - 10536.172: 90.8492% ( 42) 00:08:44.562 10536.172 - 10586.585: 91.1855% ( 48) 00:08:44.562 10586.585 - 10636.997: 91.5569% ( 53) 00:08:44.562 10636.997 - 10687.409: 91.7671% ( 30) 00:08:44.562 10687.409 - 10737.822: 91.9773% ( 30) 00:08:44.562 10737.822 - 10788.234: 92.1945% ( 31) 00:08:44.562 10788.234 - 10838.646: 92.3627% ( 24) 00:08:44.562 10838.646 - 10889.058: 92.5729% ( 30) 00:08:44.562 10889.058 - 10939.471: 92.7691% ( 28) 00:08:44.562 10939.471 - 10989.883: 92.9723% ( 29) 00:08:44.562 10989.883 - 11040.295: 93.0774% ( 15) 00:08:44.562 11040.295 - 11090.708: 93.1404% ( 9) 00:08:44.562 11090.708 - 11141.120: 93.2105% ( 10) 00:08:44.562 11141.120 - 11191.532: 93.2595% ( 7) 00:08:44.562 11191.532 - 11241.945: 93.3016% ( 6) 00:08:44.562 11241.945 - 11292.357: 93.3646% ( 9) 00:08:44.562 11292.357 - 11342.769: 93.4557% ( 13) 00:08:44.562 11342.769 - 11393.182: 93.6029% ( 21) 00:08:44.562 11393.182 - 11443.594: 93.7430% ( 20) 00:08:44.562 11443.594 - 11494.006: 93.8131% ( 10) 00:08:44.562 11494.006 - 11544.418: 93.8761% ( 9) 00:08:44.562 11544.418 - 11594.831: 93.9532% ( 11) 00:08:44.562 11594.831 - 11645.243: 94.0092% ( 8) 00:08:44.562 11645.243 - 11695.655: 94.0723% ( 9) 00:08:44.562 11695.655 - 11746.068: 94.1564% ( 12) 00:08:44.562 11746.068 - 11796.480: 94.2335% ( 11) 00:08:44.562 11796.480 - 11846.892: 94.3246% ( 13) 00:08:44.562 11846.892 - 11897.305: 94.4297% ( 15) 00:08:44.562 11897.305 - 11947.717: 94.5838% ( 22) 00:08:44.562 11947.717 - 11998.129: 94.6539% ( 10) 00:08:44.562 11998.129 - 12048.542: 94.7309% ( 11) 00:08:44.562 12048.542 - 12098.954: 94.7870% ( 8) 00:08:44.562 12098.954 - 12149.366: 94.8430% ( 8) 00:08:44.562 12149.366 - 12199.778: 94.8711% ( 4) 00:08:44.562 12199.778 - 12250.191: 94.9341% ( 9) 00:08:44.562 12250.191 - 12300.603: 95.0252% ( 13) 00:08:44.562 12300.603 - 12351.015: 95.0743% ( 7) 00:08:44.562 12351.015 - 12401.428: 95.1443% ( 10) 00:08:44.562 12401.428 - 12451.840: 95.2564% ( 16) 00:08:44.562 12451.840 - 12502.252: 95.3545% ( 14) 00:08:44.562 12502.252 - 12552.665: 95.4456% ( 13) 00:08:44.562 12552.665 - 12603.077: 95.5297% ( 12) 00:08:44.562 12603.077 - 12653.489: 95.6558% ( 18) 00:08:44.562 12653.489 - 12703.902: 95.7609% ( 15) 00:08:44.562 12703.902 - 12754.314: 95.8450% ( 12) 00:08:44.562 12754.314 - 12804.726: 95.9291% ( 12) 00:08:44.562 12804.726 - 12855.138: 
96.0062% ( 11) 00:08:44.562 12855.138 - 12905.551: 96.0762% ( 10) 00:08:44.562 12905.551 - 13006.375: 96.2164% ( 20) 00:08:44.562 13006.375 - 13107.200: 96.3285% ( 16) 00:08:44.562 13107.200 - 13208.025: 96.4196% ( 13) 00:08:44.562 13208.025 - 13308.849: 96.4756% ( 8) 00:08:44.562 13308.849 - 13409.674: 96.5457% ( 10) 00:08:44.562 13409.674 - 13510.498: 96.6928% ( 21) 00:08:44.562 13510.498 - 13611.323: 96.8890% ( 28) 00:08:44.562 13611.323 - 13712.148: 97.0502% ( 23) 00:08:44.562 13712.148 - 13812.972: 97.2113% ( 23) 00:08:44.562 13812.972 - 13913.797: 97.3725% ( 23) 00:08:44.562 13913.797 - 14014.622: 97.4706% ( 14) 00:08:44.562 14014.622 - 14115.446: 97.5687% ( 14) 00:08:44.562 14115.446 - 14216.271: 97.6738% ( 15) 00:08:44.562 14216.271 - 14317.095: 97.7438% ( 10) 00:08:44.562 14317.095 - 14417.920: 97.7999% ( 8) 00:08:44.562 14417.920 - 14518.745: 97.8559% ( 8) 00:08:44.562 14518.745 - 14619.569: 97.9120% ( 8) 00:08:44.562 14619.569 - 14720.394: 97.9751% ( 9) 00:08:44.562 14720.394 - 14821.218: 98.0241% ( 7) 00:08:44.562 14821.218 - 14922.043: 98.0802% ( 8) 00:08:44.562 14922.043 - 15022.868: 98.1712% ( 13) 00:08:44.562 15022.868 - 15123.692: 98.3184% ( 21) 00:08:44.562 15123.692 - 15224.517: 98.4655% ( 21) 00:08:44.562 15224.517 - 15325.342: 98.7598% ( 42) 00:08:44.562 15325.342 - 15426.166: 98.8439% ( 12) 00:08:44.562 15426.166 - 15526.991: 98.9070% ( 9) 00:08:44.562 15526.991 - 15627.815: 98.9560% ( 7) 00:08:44.562 15627.815 - 15728.640: 98.9980% ( 6) 00:08:44.562 15728.640 - 15829.465: 99.0331% ( 5) 00:08:44.562 15829.465 - 15930.289: 99.0751% ( 6) 00:08:44.562 15930.289 - 16031.114: 99.1031% ( 4) 00:08:44.562 24702.031 - 24802.855: 99.1101% ( 1) 00:08:44.562 24802.855 - 24903.680: 99.2152% ( 15) 00:08:44.562 24903.680 - 25004.505: 99.3133% ( 14) 00:08:44.562 25004.505 - 25105.329: 99.3344% ( 3) 00:08:44.562 25105.329 - 25206.154: 99.3554% ( 3) 00:08:44.562 25206.154 - 25306.978: 99.3764% ( 3) 00:08:44.562 25306.978 - 25407.803: 99.3974% ( 3) 00:08:44.562 25407.803 - 25508.628: 99.4184% ( 3) 00:08:44.562 25508.628 - 25609.452: 99.4395% ( 3) 00:08:44.562 25609.452 - 25710.277: 99.4605% ( 3) 00:08:44.562 25710.277 - 25811.102: 99.4885% ( 4) 00:08:44.562 25811.102 - 26012.751: 99.5235% ( 5) 00:08:44.562 26012.751 - 26214.400: 99.5516% ( 4) 00:08:44.562 28835.840 - 29037.489: 99.5866% ( 5) 00:08:44.562 30045.735 - 30247.385: 99.6146% ( 4) 00:08:44.562 30247.385 - 30449.034: 99.6637% ( 7) 00:08:44.562 30449.034 - 30650.683: 99.7127% ( 7) 00:08:44.562 30650.683 - 30852.332: 99.7688% ( 8) 00:08:44.562 30852.332 - 31053.982: 99.8248% ( 8) 00:08:44.562 31053.982 - 31255.631: 99.8739% ( 7) 00:08:44.562 31255.631 - 31457.280: 99.9299% ( 8) 00:08:44.562 31457.280 - 31658.929: 99.9790% ( 7) 00:08:44.562 31658.929 - 31860.578: 100.0000% ( 3) 00:08:44.562 00:08:44.562 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:44.562 ============================================================================== 00:08:44.562 Range in us Cumulative IO count 00:08:44.562 5747.003 - 5772.209: 0.0070% ( 1) 00:08:44.562 5822.622 - 5847.828: 0.0140% ( 1) 00:08:44.563 5847.828 - 5873.034: 0.0210% ( 1) 00:08:44.563 5873.034 - 5898.240: 0.0350% ( 2) 00:08:44.563 5898.240 - 5923.446: 0.0631% ( 4) 00:08:44.563 5923.446 - 5948.652: 0.0841% ( 3) 00:08:44.563 5948.652 - 5973.858: 0.1051% ( 3) 00:08:44.563 5973.858 - 5999.065: 0.1191% ( 2) 00:08:44.563 5999.065 - 6024.271: 0.1401% ( 3) 00:08:44.563 6024.271 - 6049.477: 0.1822% ( 6) 00:08:44.563 6049.477 - 6074.683: 0.2733% ( 13) 00:08:44.563 6074.683 - 
6099.889: 0.4274% ( 22) 00:08:44.563 6099.889 - 6125.095: 0.6026% ( 25) 00:08:44.563 6125.095 - 6150.302: 0.8058% ( 29) 00:08:44.563 6150.302 - 6175.508: 0.9879% ( 26) 00:08:44.563 6175.508 - 6200.714: 1.2892% ( 43) 00:08:44.563 6200.714 - 6225.920: 1.6256% ( 48) 00:08:44.563 6225.920 - 6251.126: 1.9198% ( 42) 00:08:44.563 6251.126 - 6276.332: 2.2141% ( 42) 00:08:44.563 6276.332 - 6301.538: 2.4313% ( 31) 00:08:44.563 6301.538 - 6326.745: 2.7046% ( 39) 00:08:44.563 6326.745 - 6351.951: 3.0689% ( 52) 00:08:44.563 6351.951 - 6377.157: 3.3072% ( 34) 00:08:44.563 6377.157 - 6402.363: 3.5384% ( 33) 00:08:44.563 6402.363 - 6427.569: 3.7346% ( 28) 00:08:44.563 6427.569 - 6452.775: 3.9238% ( 27) 00:08:44.563 6452.775 - 6503.188: 4.2391% ( 45) 00:08:44.563 6503.188 - 6553.600: 4.4703% ( 33) 00:08:44.563 6553.600 - 6604.012: 4.8206% ( 50) 00:08:44.563 6604.012 - 6654.425: 5.1640% ( 49) 00:08:44.563 6654.425 - 6704.837: 5.3041% ( 20) 00:08:44.563 6704.837 - 6755.249: 5.6614% ( 51) 00:08:44.563 6755.249 - 6805.662: 5.9207% ( 37) 00:08:44.563 6805.662 - 6856.074: 6.2990% ( 54) 00:08:44.563 6856.074 - 6906.486: 6.6003% ( 43) 00:08:44.563 6906.486 - 6956.898: 6.9086% ( 44) 00:08:44.563 6956.898 - 7007.311: 7.4902% ( 83) 00:08:44.563 7007.311 - 7057.723: 8.0717% ( 83) 00:08:44.563 7057.723 - 7108.135: 8.4922% ( 60) 00:08:44.563 7108.135 - 7158.548: 8.8145% ( 46) 00:08:44.563 7158.548 - 7208.960: 9.3820% ( 81) 00:08:44.563 7208.960 - 7259.372: 9.6623% ( 40) 00:08:44.563 7259.372 - 7309.785: 9.9566% ( 42) 00:08:44.563 7309.785 - 7360.197: 10.2438% ( 41) 00:08:44.563 7360.197 - 7410.609: 10.5732% ( 47) 00:08:44.563 7410.609 - 7461.022: 11.1267% ( 79) 00:08:44.563 7461.022 - 7511.434: 11.9044% ( 111) 00:08:44.563 7511.434 - 7561.846: 12.4860% ( 83) 00:08:44.563 7561.846 - 7612.258: 13.2497% ( 109) 00:08:44.563 7612.258 - 7662.671: 14.0765% ( 118) 00:08:44.563 7662.671 - 7713.083: 15.1415% ( 152) 00:08:44.563 7713.083 - 7763.495: 16.1785% ( 148) 00:08:44.563 7763.495 - 7813.908: 17.6570% ( 211) 00:08:44.563 7813.908 - 7864.320: 19.0022% ( 192) 00:08:44.563 7864.320 - 7914.732: 20.6839% ( 240) 00:08:44.563 7914.732 - 7965.145: 22.3515% ( 238) 00:08:44.563 7965.145 - 8015.557: 24.1242% ( 253) 00:08:44.563 8015.557 - 8065.969: 26.2962% ( 310) 00:08:44.563 8065.969 - 8116.382: 28.6645% ( 338) 00:08:44.563 8116.382 - 8166.794: 31.2710% ( 372) 00:08:44.563 8166.794 - 8217.206: 34.2138% ( 420) 00:08:44.563 8217.206 - 8267.618: 37.0165% ( 400) 00:08:44.563 8267.618 - 8318.031: 39.9033% ( 412) 00:08:44.563 8318.031 - 8368.443: 42.6359% ( 390) 00:08:44.563 8368.443 - 8418.855: 45.3055% ( 381) 00:08:44.563 8418.855 - 8469.268: 48.0031% ( 385) 00:08:44.563 8469.268 - 8519.680: 50.2943% ( 327) 00:08:44.563 8519.680 - 8570.092: 52.7256% ( 347) 00:08:44.563 8570.092 - 8620.505: 54.8066% ( 297) 00:08:44.563 8620.505 - 8670.917: 57.0488% ( 320) 00:08:44.563 8670.917 - 8721.329: 59.0457% ( 285) 00:08:44.563 8721.329 - 8771.742: 60.7553% ( 244) 00:08:44.563 8771.742 - 8822.154: 62.1987% ( 206) 00:08:44.563 8822.154 - 8872.566: 63.5160% ( 188) 00:08:44.563 8872.566 - 8922.978: 64.7702% ( 179) 00:08:44.563 8922.978 - 8973.391: 65.8842% ( 159) 00:08:44.563 8973.391 - 9023.803: 67.0964% ( 173) 00:08:44.563 9023.803 - 9074.215: 68.2735% ( 168) 00:08:44.563 9074.215 - 9124.628: 69.5908% ( 188) 00:08:44.563 9124.628 - 9175.040: 70.9501% ( 194) 00:08:44.563 9175.040 - 9225.452: 72.1132% ( 166) 00:08:44.563 9225.452 - 9275.865: 73.1222% ( 144) 00:08:44.563 9275.865 - 9326.277: 74.2573% ( 162) 00:08:44.563 9326.277 - 9376.689: 75.4765% ( 
174) 00:08:44.563 9376.689 - 9427.102: 76.6186% ( 163) 00:08:44.563 9427.102 - 9477.514: 77.5575% ( 134) 00:08:44.563 9477.514 - 9527.926: 78.4333% ( 125) 00:08:44.563 9527.926 - 9578.338: 79.3442% ( 130) 00:08:44.563 9578.338 - 9628.751: 80.2761% ( 133) 00:08:44.563 9628.751 - 9679.163: 81.0608% ( 112) 00:08:44.563 9679.163 - 9729.575: 81.8946% ( 119) 00:08:44.563 9729.575 - 9779.988: 82.6584% ( 109) 00:08:44.563 9779.988 - 9830.400: 83.3240% ( 95) 00:08:44.563 9830.400 - 9880.812: 83.9756% ( 93) 00:08:44.563 9880.812 - 9931.225: 84.6132% ( 91) 00:08:44.563 9931.225 - 9981.637: 85.2158% ( 86) 00:08:44.563 9981.637 - 10032.049: 85.7693% ( 79) 00:08:44.563 10032.049 - 10082.462: 86.3929% ( 89) 00:08:44.563 10082.462 - 10132.874: 86.9955% ( 86) 00:08:44.563 10132.874 - 10183.286: 87.5911% ( 85) 00:08:44.563 10183.286 - 10233.698: 88.1236% ( 76) 00:08:44.563 10233.698 - 10284.111: 88.5790% ( 65) 00:08:44.563 10284.111 - 10334.523: 89.0345% ( 65) 00:08:44.563 10334.523 - 10384.935: 89.5179% ( 69) 00:08:44.563 10384.935 - 10435.348: 89.8473% ( 47) 00:08:44.563 10435.348 - 10485.760: 90.2186% ( 53) 00:08:44.563 10485.760 - 10536.172: 90.4919% ( 39) 00:08:44.563 10536.172 - 10586.585: 90.8072% ( 45) 00:08:44.563 10586.585 - 10636.997: 91.0664% ( 37) 00:08:44.563 10636.997 - 10687.409: 91.3607% ( 42) 00:08:44.563 10687.409 - 10737.822: 91.5569% ( 28) 00:08:44.563 10737.822 - 10788.234: 91.8232% ( 38) 00:08:44.563 10788.234 - 10838.646: 92.0263% ( 29) 00:08:44.563 10838.646 - 10889.058: 92.2155% ( 27) 00:08:44.563 10889.058 - 10939.471: 92.3837% ( 24) 00:08:44.563 10939.471 - 10989.883: 92.5939% ( 30) 00:08:44.563 10989.883 - 11040.295: 92.8251% ( 33) 00:08:44.563 11040.295 - 11090.708: 93.0563% ( 33) 00:08:44.563 11090.708 - 11141.120: 93.2245% ( 24) 00:08:44.563 11141.120 - 11191.532: 93.3506% ( 18) 00:08:44.563 11191.532 - 11241.945: 93.4277% ( 11) 00:08:44.563 11241.945 - 11292.357: 93.4978% ( 10) 00:08:44.563 11292.357 - 11342.769: 93.6099% ( 16) 00:08:44.563 11342.769 - 11393.182: 93.6939% ( 12) 00:08:44.563 11393.182 - 11443.594: 93.7710% ( 11) 00:08:44.563 11443.594 - 11494.006: 93.8831% ( 16) 00:08:44.563 11494.006 - 11544.418: 94.0163% ( 19) 00:08:44.563 11544.418 - 11594.831: 94.1214% ( 15) 00:08:44.563 11594.831 - 11645.243: 94.2895% ( 24) 00:08:44.563 11645.243 - 11695.655: 94.3596% ( 10) 00:08:44.563 11695.655 - 11746.068: 94.4086% ( 7) 00:08:44.563 11746.068 - 11796.480: 94.4927% ( 12) 00:08:44.563 11796.480 - 11846.892: 94.5418% ( 7) 00:08:44.563 11846.892 - 11897.305: 94.5908% ( 7) 00:08:44.563 11897.305 - 11947.717: 94.6399% ( 7) 00:08:44.563 11947.717 - 11998.129: 94.6749% ( 5) 00:08:44.563 11998.129 - 12048.542: 94.7169% ( 6) 00:08:44.563 12048.542 - 12098.954: 94.7730% ( 8) 00:08:44.563 12098.954 - 12149.366: 94.8501% ( 11) 00:08:44.563 12149.366 - 12199.778: 94.9271% ( 11) 00:08:44.563 12199.778 - 12250.191: 94.9552% ( 4) 00:08:44.563 12250.191 - 12300.603: 95.0112% ( 8) 00:08:44.563 12300.603 - 12351.015: 95.0462% ( 5) 00:08:44.563 12351.015 - 12401.428: 95.0953% ( 7) 00:08:44.563 12401.428 - 12451.840: 95.1443% ( 7) 00:08:44.563 12451.840 - 12502.252: 95.1724% ( 4) 00:08:44.563 12502.252 - 12552.665: 95.1794% ( 1) 00:08:44.563 12552.665 - 12603.077: 95.1934% ( 2) 00:08:44.563 12603.077 - 12653.489: 95.2214% ( 4) 00:08:44.563 12653.489 - 12703.902: 95.2354% ( 2) 00:08:44.563 12703.902 - 12754.314: 95.2635% ( 4) 00:08:44.563 12754.314 - 12804.726: 95.2915% ( 4) 00:08:44.563 12804.726 - 12855.138: 95.3545% ( 9) 00:08:44.563 12855.138 - 12905.551: 95.4526% ( 14) 00:08:44.563 
12905.551 - 13006.375: 95.7609% ( 44) 00:08:44.563 13006.375 - 13107.200: 95.9151% ( 22) 00:08:44.563 13107.200 - 13208.025: 96.1533% ( 34) 00:08:44.563 13208.025 - 13308.849: 96.4266% ( 39) 00:08:44.563 13308.849 - 13409.674: 96.6158% ( 27) 00:08:44.563 13409.674 - 13510.498: 96.7909% ( 25) 00:08:44.563 13510.498 - 13611.323: 96.9451% ( 22) 00:08:44.563 13611.323 - 13712.148: 97.1763% ( 33) 00:08:44.563 13712.148 - 13812.972: 97.3585% ( 26) 00:08:44.563 13812.972 - 13913.797: 97.5617% ( 29) 00:08:44.563 13913.797 - 14014.622: 97.7018% ( 20) 00:08:44.563 14014.622 - 14115.446: 97.8419% ( 20) 00:08:44.563 14115.446 - 14216.271: 97.9540% ( 16) 00:08:44.563 14216.271 - 14317.095: 98.0451% ( 13) 00:08:44.563 14317.095 - 14417.920: 98.1502% ( 15) 00:08:44.563 14417.920 - 14518.745: 98.2203% ( 10) 00:08:44.563 14518.745 - 14619.569: 98.3464% ( 18) 00:08:44.563 14619.569 - 14720.394: 98.4375% ( 13) 00:08:44.563 14720.394 - 14821.218: 98.5636% ( 18) 00:08:44.563 14821.218 - 14922.043: 98.6477% ( 12) 00:08:44.563 14922.043 - 15022.868: 98.6967% ( 7) 00:08:44.563 15022.868 - 15123.692: 98.7388% ( 6) 00:08:44.563 15123.692 - 15224.517: 98.7948% ( 8) 00:08:44.563 15224.517 - 15325.342: 98.8579% ( 9) 00:08:44.563 15325.342 - 15426.166: 98.9280% ( 10) 00:08:44.563 15426.166 - 15526.991: 98.9700% ( 6) 00:08:44.563 15526.991 - 15627.815: 98.9980% ( 4) 00:08:44.563 15627.815 - 15728.640: 99.0261% ( 4) 00:08:44.563 15728.640 - 15829.465: 99.0471% ( 3) 00:08:44.563 15829.465 - 15930.289: 99.0681% ( 3) 00:08:44.563 15930.289 - 16031.114: 99.0961% ( 4) 00:08:44.563 16031.114 - 16131.938: 99.1031% ( 1) 00:08:44.563 23189.662 - 23290.486: 99.1312% ( 4) 00:08:44.563 23290.486 - 23391.311: 99.1522% ( 3) 00:08:44.563 23391.311 - 23492.135: 99.1802% ( 4) 00:08:44.563 23492.135 - 23592.960: 99.2082% ( 4) 00:08:44.563 23592.960 - 23693.785: 99.2363% ( 4) 00:08:44.564 23693.785 - 23794.609: 99.2573% ( 3) 00:08:44.564 23794.609 - 23895.434: 99.2853% ( 4) 00:08:44.564 23895.434 - 23996.258: 99.3063% ( 3) 00:08:44.564 23996.258 - 24097.083: 99.3344% ( 4) 00:08:44.564 24097.083 - 24197.908: 99.3624% ( 4) 00:08:44.564 24197.908 - 24298.732: 99.3904% ( 4) 00:08:44.564 24298.732 - 24399.557: 99.4114% ( 3) 00:08:44.564 24399.557 - 24500.382: 99.4395% ( 4) 00:08:44.564 24500.382 - 24601.206: 99.4605% ( 3) 00:08:44.564 24601.206 - 24702.031: 99.4885% ( 4) 00:08:44.564 24702.031 - 24802.855: 99.5165% ( 4) 00:08:44.564 24802.855 - 24903.680: 99.5376% ( 3) 00:08:44.564 24903.680 - 25004.505: 99.5516% ( 2) 00:08:44.564 28835.840 - 29037.489: 99.5796% ( 4) 00:08:44.564 29037.489 - 29239.138: 99.6286% ( 7) 00:08:44.564 29239.138 - 29440.788: 99.6777% ( 7) 00:08:44.564 29440.788 - 29642.437: 99.7197% ( 6) 00:08:44.564 29642.437 - 29844.086: 99.7688% ( 7) 00:08:44.564 29844.086 - 30045.735: 99.8248% ( 8) 00:08:44.564 30045.735 - 30247.385: 99.8739% ( 7) 00:08:44.564 30247.385 - 30449.034: 99.9299% ( 8) 00:08:44.564 30449.034 - 30650.683: 99.9790% ( 7) 00:08:44.564 30650.683 - 30852.332: 100.0000% ( 3) 00:08:44.564 00:08:44.564 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:44.564 ============================================================================== 00:08:44.564 Range in us Cumulative IO count 00:08:44.564 5772.209 - 5797.415: 0.0070% ( 1) 00:08:44.564 5822.622 - 5847.828: 0.0140% ( 1) 00:08:44.564 5898.240 - 5923.446: 0.0280% ( 2) 00:08:44.564 5923.446 - 5948.652: 0.0561% ( 4) 00:08:44.564 5948.652 - 5973.858: 0.1051% ( 7) 00:08:44.564 5973.858 - 5999.065: 0.1822% ( 11) 00:08:44.564 5999.065 - 6024.271: 
0.2452% ( 9) 00:08:44.564 6024.271 - 6049.477: 0.3153% ( 10) 00:08:44.564 6049.477 - 6074.683: 0.4554% ( 20) 00:08:44.564 6074.683 - 6099.889: 0.7918% ( 48) 00:08:44.564 6099.889 - 6125.095: 0.9809% ( 27) 00:08:44.564 6125.095 - 6150.302: 1.1561% ( 25) 00:08:44.564 6150.302 - 6175.508: 1.3453% ( 27) 00:08:44.564 6175.508 - 6200.714: 1.6816% ( 48) 00:08:44.564 6200.714 - 6225.920: 1.9619% ( 40) 00:08:44.564 6225.920 - 6251.126: 2.2492% ( 41) 00:08:44.564 6251.126 - 6276.332: 2.5224% ( 39) 00:08:44.564 6276.332 - 6301.538: 2.7186% ( 28) 00:08:44.564 6301.538 - 6326.745: 2.9779% ( 37) 00:08:44.564 6326.745 - 6351.951: 3.2021% ( 32) 00:08:44.564 6351.951 - 6377.157: 3.4823% ( 40) 00:08:44.564 6377.157 - 6402.363: 3.7976% ( 45) 00:08:44.564 6402.363 - 6427.569: 3.9378% ( 20) 00:08:44.564 6427.569 - 6452.775: 4.1270% ( 27) 00:08:44.564 6452.775 - 6503.188: 4.3862% ( 37) 00:08:44.564 6503.188 - 6553.600: 4.5824% ( 28) 00:08:44.564 6553.600 - 6604.012: 4.8136% ( 33) 00:08:44.564 6604.012 - 6654.425: 4.9888% ( 25) 00:08:44.564 6654.425 - 6704.837: 5.3321% ( 49) 00:08:44.564 6704.837 - 6755.249: 5.5003% ( 24) 00:08:44.564 6755.249 - 6805.662: 5.8016% ( 43) 00:08:44.564 6805.662 - 6856.074: 6.0048% ( 29) 00:08:44.564 6856.074 - 6906.486: 6.3831% ( 54) 00:08:44.564 6906.486 - 6956.898: 6.7475% ( 52) 00:08:44.564 6956.898 - 7007.311: 7.2239% ( 68) 00:08:44.564 7007.311 - 7057.723: 8.0017% ( 111) 00:08:44.564 7057.723 - 7108.135: 8.2749% ( 39) 00:08:44.564 7108.135 - 7158.548: 8.5552% ( 40) 00:08:44.564 7158.548 - 7208.960: 9.0177% ( 66) 00:08:44.564 7208.960 - 7259.372: 9.3820% ( 52) 00:08:44.564 7259.372 - 7309.785: 9.9355% ( 79) 00:08:44.564 7309.785 - 7360.197: 10.2368% ( 43) 00:08:44.564 7360.197 - 7410.609: 10.6502% ( 59) 00:08:44.564 7410.609 - 7461.022: 11.0636% ( 59) 00:08:44.564 7461.022 - 7511.434: 11.5471% ( 69) 00:08:44.564 7511.434 - 7561.846: 12.1006% ( 79) 00:08:44.564 7561.846 - 7612.258: 12.9975% ( 128) 00:08:44.564 7612.258 - 7662.671: 13.8733% ( 125) 00:08:44.564 7662.671 - 7713.083: 14.8963% ( 146) 00:08:44.564 7713.083 - 7763.495: 15.9753% ( 154) 00:08:44.564 7763.495 - 7813.908: 17.2225% ( 178) 00:08:44.564 7813.908 - 7864.320: 18.6519% ( 204) 00:08:44.564 7864.320 - 7914.732: 20.2635% ( 230) 00:08:44.564 7914.732 - 7965.145: 22.0151% ( 250) 00:08:44.564 7965.145 - 8015.557: 24.1172% ( 300) 00:08:44.564 8015.557 - 8065.969: 26.6045% ( 355) 00:08:44.564 8065.969 - 8116.382: 28.7066% ( 300) 00:08:44.564 8116.382 - 8166.794: 31.5723% ( 409) 00:08:44.564 8166.794 - 8217.206: 34.3680% ( 399) 00:08:44.564 8217.206 - 8267.618: 37.1146% ( 392) 00:08:44.564 8267.618 - 8318.031: 39.9033% ( 398) 00:08:44.564 8318.031 - 8368.443: 42.8742% ( 424) 00:08:44.564 8368.443 - 8418.855: 45.1303% ( 322) 00:08:44.564 8418.855 - 8469.268: 47.6878% ( 365) 00:08:44.564 8469.268 - 8519.680: 49.9860% ( 328) 00:08:44.564 8519.680 - 8570.092: 52.1090% ( 303) 00:08:44.564 8570.092 - 8620.505: 54.4563% ( 335) 00:08:44.564 8620.505 - 8670.917: 56.3271% ( 267) 00:08:44.564 8670.917 - 8721.329: 58.2820% ( 279) 00:08:44.564 8721.329 - 8771.742: 60.0476% ( 252) 00:08:44.564 8771.742 - 8822.154: 61.4420% ( 199) 00:08:44.564 8822.154 - 8872.566: 62.9064% ( 209) 00:08:44.564 8872.566 - 8922.978: 64.3147% ( 201) 00:08:44.564 8922.978 - 8973.391: 65.4498% ( 162) 00:08:44.564 8973.391 - 9023.803: 66.5499% ( 157) 00:08:44.564 9023.803 - 9074.215: 67.8742% ( 189) 00:08:44.564 9074.215 - 9124.628: 69.1003% ( 175) 00:08:44.564 9124.628 - 9175.040: 70.3756% ( 182) 00:08:44.564 9175.040 - 9225.452: 71.6087% ( 176) 00:08:44.564 
9225.452 - 9275.865: 72.8489% ( 177) 00:08:44.564 9275.865 - 9326.277: 74.0681% ( 174) 00:08:44.564 9326.277 - 9376.689: 75.2592% ( 170) 00:08:44.564 9376.689 - 9427.102: 76.4434% ( 169) 00:08:44.564 9427.102 - 9477.514: 77.4313% ( 141) 00:08:44.564 9477.514 - 9527.926: 78.3072% ( 125) 00:08:44.564 9527.926 - 9578.338: 79.1129% ( 115) 00:08:44.564 9578.338 - 9628.751: 79.9467% ( 119) 00:08:44.564 9628.751 - 9679.163: 80.8296% ( 126) 00:08:44.564 9679.163 - 9729.575: 81.6844% ( 122) 00:08:44.564 9729.575 - 9779.988: 82.5462% ( 123) 00:08:44.564 9779.988 - 9830.400: 83.3170% ( 110) 00:08:44.564 9830.400 - 9880.812: 84.1087% ( 113) 00:08:44.564 9880.812 - 9931.225: 84.7674% ( 94) 00:08:44.564 9931.225 - 9981.637: 85.4891% ( 103) 00:08:44.564 9981.637 - 10032.049: 86.1407% ( 93) 00:08:44.564 10032.049 - 10082.462: 86.7503% ( 87) 00:08:44.564 10082.462 - 10132.874: 87.3669% ( 88) 00:08:44.564 10132.874 - 10183.286: 87.9134% ( 78) 00:08:44.564 10183.286 - 10233.698: 88.4039% ( 70) 00:08:44.564 10233.698 - 10284.111: 88.8873% ( 69) 00:08:44.564 10284.111 - 10334.523: 89.3988% ( 73) 00:08:44.564 10334.523 - 10384.935: 89.8262% ( 61) 00:08:44.564 10384.935 - 10435.348: 90.2536% ( 61) 00:08:44.564 10435.348 - 10485.760: 90.6460% ( 56) 00:08:44.564 10485.760 - 10536.172: 90.8842% ( 34) 00:08:44.564 10536.172 - 10586.585: 91.0945% ( 30) 00:08:44.564 10586.585 - 10636.997: 91.3327% ( 34) 00:08:44.564 10636.997 - 10687.409: 91.5078% ( 25) 00:08:44.564 10687.409 - 10737.822: 91.7251% ( 31) 00:08:44.564 10737.822 - 10788.234: 92.0754% ( 50) 00:08:44.564 10788.234 - 10838.646: 92.3066% ( 33) 00:08:44.564 10838.646 - 10889.058: 92.5238% ( 31) 00:08:44.564 10889.058 - 10939.471: 92.6850% ( 23) 00:08:44.564 10939.471 - 10989.883: 92.8812% ( 28) 00:08:44.564 10989.883 - 11040.295: 93.0703% ( 27) 00:08:44.564 11040.295 - 11090.708: 93.2946% ( 32) 00:08:44.564 11090.708 - 11141.120: 93.5118% ( 31) 00:08:44.564 11141.120 - 11191.532: 93.6729% ( 23) 00:08:44.564 11191.532 - 11241.945: 93.7710% ( 14) 00:08:44.564 11241.945 - 11292.357: 93.8691% ( 14) 00:08:44.564 11292.357 - 11342.769: 93.9672% ( 14) 00:08:44.564 11342.769 - 11393.182: 94.0513% ( 12) 00:08:44.564 11393.182 - 11443.594: 94.1844% ( 19) 00:08:44.564 11443.594 - 11494.006: 94.3175% ( 19) 00:08:44.564 11494.006 - 11544.418: 94.4647% ( 21) 00:08:44.564 11544.418 - 11594.831: 94.5628% ( 14) 00:08:44.564 11594.831 - 11645.243: 94.6328% ( 10) 00:08:44.564 11645.243 - 11695.655: 94.6819% ( 7) 00:08:44.564 11695.655 - 11746.068: 94.7450% ( 9) 00:08:44.564 11746.068 - 11796.480: 94.7870% ( 6) 00:08:44.564 11796.480 - 11846.892: 94.8430% ( 8) 00:08:44.564 11846.892 - 11897.305: 94.8991% ( 8) 00:08:44.564 11897.305 - 11947.717: 94.9622% ( 9) 00:08:44.564 11947.717 - 11998.129: 95.0182% ( 8) 00:08:44.565 11998.129 - 12048.542: 95.0603% ( 6) 00:08:44.565 12048.542 - 12098.954: 95.0673% ( 1) 00:08:44.565 12300.603 - 12351.015: 95.0883% ( 3) 00:08:44.565 12351.015 - 12401.428: 95.1093% ( 3) 00:08:44.565 12401.428 - 12451.840: 95.1233% ( 2) 00:08:44.565 12451.840 - 12502.252: 95.1303% ( 1) 00:08:44.565 12502.252 - 12552.665: 95.1373% ( 1) 00:08:44.565 12552.665 - 12603.077: 95.1513% ( 2) 00:08:44.565 12603.077 - 12653.489: 95.1654% ( 2) 00:08:44.565 12653.489 - 12703.902: 95.1724% ( 1) 00:08:44.565 12703.902 - 12754.314: 95.1864% ( 2) 00:08:44.565 12754.314 - 12804.726: 95.2004% ( 2) 00:08:44.565 12804.726 - 12855.138: 95.2074% ( 1) 00:08:44.565 12855.138 - 12905.551: 95.2284% ( 3) 00:08:44.565 12905.551 - 13006.375: 95.2564% ( 4) 00:08:44.565 13006.375 - 13107.200: 
95.3826% ( 18) 00:08:44.565 13107.200 - 13208.025: 95.6208% ( 34) 00:08:44.565 13208.025 - 13308.849: 95.8941% ( 39) 00:08:44.565 13308.849 - 13409.674: 96.1673% ( 39) 00:08:44.565 13409.674 - 13510.498: 96.4756% ( 44) 00:08:44.565 13510.498 - 13611.323: 96.6928% ( 31) 00:08:44.565 13611.323 - 13712.148: 96.9170% ( 32) 00:08:44.565 13712.148 - 13812.972: 97.0642% ( 21) 00:08:44.565 13812.972 - 13913.797: 97.1903% ( 18) 00:08:44.565 13913.797 - 14014.622: 97.2954% ( 15) 00:08:44.565 14014.622 - 14115.446: 97.4986% ( 29) 00:08:44.565 14115.446 - 14216.271: 97.8139% ( 45) 00:08:44.565 14216.271 - 14317.095: 98.0311% ( 31) 00:08:44.565 14317.095 - 14417.920: 98.3674% ( 48) 00:08:44.565 14417.920 - 14518.745: 98.7248% ( 51) 00:08:44.565 14518.745 - 14619.569: 98.8229% ( 14) 00:08:44.565 14619.569 - 14720.394: 98.8999% ( 11) 00:08:44.565 14720.394 - 14821.218: 98.9280% ( 4) 00:08:44.565 14821.218 - 14922.043: 98.9490% ( 3) 00:08:44.565 14922.043 - 15022.868: 98.9770% ( 4) 00:08:44.565 15022.868 - 15123.692: 98.9980% ( 3) 00:08:44.565 15123.692 - 15224.517: 99.0191% ( 3) 00:08:44.565 15224.517 - 15325.342: 99.0471% ( 4) 00:08:44.565 15325.342 - 15426.166: 99.0681% ( 3) 00:08:44.565 15426.166 - 15526.991: 99.0891% ( 3) 00:08:44.565 15526.991 - 15627.815: 99.0961% ( 1) 00:08:44.565 15627.815 - 15728.640: 99.1031% ( 1) 00:08:44.565 21878.942 - 21979.766: 99.1592% ( 8) 00:08:44.565 21979.766 - 22080.591: 99.2783% ( 17) 00:08:44.565 22080.591 - 22181.415: 99.2993% ( 3) 00:08:44.565 22181.415 - 22282.240: 99.3274% ( 4) 00:08:44.565 22282.240 - 22383.065: 99.3484% ( 3) 00:08:44.565 22383.065 - 22483.889: 99.3694% ( 3) 00:08:44.565 22483.889 - 22584.714: 99.3904% ( 3) 00:08:44.565 22584.714 - 22685.538: 99.4044% ( 2) 00:08:44.565 22685.538 - 22786.363: 99.4254% ( 3) 00:08:44.565 22786.363 - 22887.188: 99.4465% ( 3) 00:08:44.565 22887.188 - 22988.012: 99.4675% ( 3) 00:08:44.565 22988.012 - 23088.837: 99.4955% ( 4) 00:08:44.565 23088.837 - 23189.662: 99.5235% ( 4) 00:08:44.565 23189.662 - 23290.486: 99.5516% ( 4) 00:08:44.565 27222.646 - 27424.295: 99.5936% ( 6) 00:08:44.565 27424.295 - 27625.945: 99.6427% ( 7) 00:08:44.565 27625.945 - 27827.594: 99.6987% ( 8) 00:08:44.565 27827.594 - 28029.243: 99.7408% ( 6) 00:08:44.565 28029.243 - 28230.892: 99.7968% ( 8) 00:08:44.565 28230.892 - 28432.542: 99.8459% ( 7) 00:08:44.565 28432.542 - 28634.191: 99.9019% ( 8) 00:08:44.565 28634.191 - 28835.840: 99.9510% ( 7) 00:08:44.565 28835.840 - 29037.489: 100.0000% ( 7) 00:08:44.565 00:08:44.565 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:44.565 ============================================================================== 00:08:44.565 Range in us Cumulative IO count 00:08:44.565 5822.622 - 5847.828: 0.0070% ( 1) 00:08:44.565 5873.034 - 5898.240: 0.0140% ( 1) 00:08:44.565 5898.240 - 5923.446: 0.0280% ( 2) 00:08:44.565 5923.446 - 5948.652: 0.0701% ( 6) 00:08:44.565 5948.652 - 5973.858: 0.1191% ( 7) 00:08:44.565 5973.858 - 5999.065: 0.1822% ( 9) 00:08:44.565 5999.065 - 6024.271: 0.2522% ( 10) 00:08:44.565 6024.271 - 6049.477: 0.3573% ( 15) 00:08:44.565 6049.477 - 6074.683: 0.4414% ( 12) 00:08:44.565 6074.683 - 6099.889: 0.6096% ( 24) 00:08:44.565 6099.889 - 6125.095: 0.7918% ( 26) 00:08:44.565 6125.095 - 6150.302: 0.8899% ( 14) 00:08:44.565 6150.302 - 6175.508: 1.1141% ( 32) 00:08:44.565 6175.508 - 6200.714: 1.2542% ( 20) 00:08:44.565 6200.714 - 6225.920: 1.4714% ( 31) 00:08:44.565 6225.920 - 6251.126: 1.6606% ( 27) 00:08:44.565 6251.126 - 6276.332: 1.8988% ( 34) 00:08:44.565 6276.332 - 6301.538: 
2.1651% ( 38) 00:08:44.565 6301.538 - 6326.745: 2.4874% ( 46) 00:08:44.565 6326.745 - 6351.951: 2.7747% ( 41) 00:08:44.565 6351.951 - 6377.157: 3.0830% ( 44) 00:08:44.565 6377.157 - 6402.363: 3.3072% ( 32) 00:08:44.565 6402.363 - 6427.569: 3.5174% ( 30) 00:08:44.565 6427.569 - 6452.775: 3.7626% ( 35) 00:08:44.565 6452.775 - 6503.188: 4.4212% ( 94) 00:08:44.565 6503.188 - 6553.600: 4.9608% ( 77) 00:08:44.565 6553.600 - 6604.012: 5.4372% ( 68) 00:08:44.565 6604.012 - 6654.425: 5.6965% ( 37) 00:08:44.565 6654.425 - 6704.837: 6.0258% ( 47) 00:08:44.565 6704.837 - 6755.249: 6.1869% ( 23) 00:08:44.565 6755.249 - 6805.662: 6.3131% ( 18) 00:08:44.565 6805.662 - 6856.074: 6.5373% ( 32) 00:08:44.565 6856.074 - 6906.486: 6.7405% ( 29) 00:08:44.565 6906.486 - 6956.898: 7.1258% ( 55) 00:08:44.565 6956.898 - 7007.311: 7.5042% ( 54) 00:08:44.565 7007.311 - 7057.723: 7.9456% ( 63) 00:08:44.565 7057.723 - 7108.135: 8.5202% ( 82) 00:08:44.565 7108.135 - 7158.548: 8.8915% ( 53) 00:08:44.565 7158.548 - 7208.960: 9.2419% ( 50) 00:08:44.565 7208.960 - 7259.372: 9.5782% ( 48) 00:08:44.565 7259.372 - 7309.785: 10.1738% ( 85) 00:08:44.565 7309.785 - 7360.197: 10.4330% ( 37) 00:08:44.565 7360.197 - 7410.609: 10.8955% ( 66) 00:08:44.565 7410.609 - 7461.022: 11.2878% ( 56) 00:08:44.565 7461.022 - 7511.434: 11.7363% ( 64) 00:08:44.565 7511.434 - 7561.846: 12.2057% ( 67) 00:08:44.565 7561.846 - 7612.258: 12.7803% ( 82) 00:08:44.565 7612.258 - 7662.671: 13.5510% ( 110) 00:08:44.565 7662.671 - 7713.083: 14.4058% ( 122) 00:08:44.565 7713.083 - 7763.495: 15.3728% ( 138) 00:08:44.565 7763.495 - 7813.908: 16.9773% ( 229) 00:08:44.565 7813.908 - 7864.320: 18.6659% ( 241) 00:08:44.565 7864.320 - 7914.732: 20.2214% ( 222) 00:08:44.565 7914.732 - 7965.145: 22.3234% ( 300) 00:08:44.565 7965.145 - 8015.557: 24.5796% ( 322) 00:08:44.565 8015.557 - 8065.969: 26.7307% ( 307) 00:08:44.565 8065.969 - 8116.382: 29.1620% ( 347) 00:08:44.565 8116.382 - 8166.794: 31.6354% ( 353) 00:08:44.565 8166.794 - 8217.206: 34.3189% ( 383) 00:08:44.565 8217.206 - 8267.618: 36.8554% ( 362) 00:08:44.565 8267.618 - 8318.031: 39.3428% ( 355) 00:08:44.565 8318.031 - 8368.443: 42.1104% ( 395) 00:08:44.565 8368.443 - 8418.855: 45.0953% ( 426) 00:08:44.565 8418.855 - 8469.268: 47.6808% ( 369) 00:08:44.565 8469.268 - 8519.680: 49.9510% ( 324) 00:08:44.565 8519.680 - 8570.092: 52.2001% ( 321) 00:08:44.565 8570.092 - 8620.505: 54.3932% ( 313) 00:08:44.565 8620.505 - 8670.917: 56.3411% ( 278) 00:08:44.565 8670.917 - 8721.329: 58.0858% ( 249) 00:08:44.565 8721.329 - 8771.742: 59.6973% ( 230) 00:08:44.565 8771.742 - 8822.154: 61.2178% ( 217) 00:08:44.565 8822.154 - 8872.566: 62.8643% ( 235) 00:08:44.565 8872.566 - 8922.978: 64.3077% ( 206) 00:08:44.565 8922.978 - 8973.391: 65.7581% ( 207) 00:08:44.565 8973.391 - 9023.803: 67.2856% ( 218) 00:08:44.565 9023.803 - 9074.215: 68.4978% ( 173) 00:08:44.565 9074.215 - 9124.628: 69.5908% ( 156) 00:08:44.565 9124.628 - 9175.040: 70.5858% ( 142) 00:08:44.565 9175.040 - 9225.452: 71.7489% ( 166) 00:08:44.565 9225.452 - 9275.865: 72.9260% ( 168) 00:08:44.565 9275.865 - 9326.277: 74.0891% ( 166) 00:08:44.565 9326.277 - 9376.689: 74.9439% ( 122) 00:08:44.565 9376.689 - 9427.102: 75.8478% ( 129) 00:08:44.565 9427.102 - 9477.514: 76.7657% ( 131) 00:08:44.565 9477.514 - 9527.926: 77.7677% ( 143) 00:08:44.565 9527.926 - 9578.338: 78.6925% ( 132) 00:08:44.565 9578.338 - 9628.751: 79.6735% ( 140) 00:08:44.565 9628.751 - 9679.163: 80.6264% ( 136) 00:08:44.565 9679.163 - 9729.575: 81.5443% ( 131) 00:08:44.565 9729.575 - 9779.988: 
82.5042% ( 137) 00:08:44.565 9779.988 - 9830.400: 83.3240% ( 117) 00:08:44.565 9830.400 - 9880.812: 84.0877% ( 109) 00:08:44.565 9880.812 - 9931.225: 84.7744% ( 98) 00:08:44.565 9931.225 - 9981.637: 85.3700% ( 85) 00:08:44.565 9981.637 - 10032.049: 86.0706% ( 100) 00:08:44.565 10032.049 - 10082.462: 86.7152% ( 92) 00:08:44.565 10082.462 - 10132.874: 87.2618% ( 78) 00:08:44.565 10132.874 - 10183.286: 87.7943% ( 76) 00:08:44.565 10183.286 - 10233.698: 88.3128% ( 74) 00:08:44.565 10233.698 - 10284.111: 88.7892% ( 68) 00:08:44.565 10284.111 - 10334.523: 89.2166% ( 61) 00:08:44.565 10334.523 - 10384.935: 89.5740% ( 51) 00:08:44.565 10384.935 - 10435.348: 89.8893% ( 45) 00:08:44.565 10435.348 - 10485.760: 90.2116% ( 46) 00:08:44.565 10485.760 - 10536.172: 90.4428% ( 33) 00:08:44.565 10536.172 - 10586.585: 90.6881% ( 35) 00:08:44.565 10586.585 - 10636.997: 91.0594% ( 53) 00:08:44.565 10636.997 - 10687.409: 91.3117% ( 36) 00:08:44.565 10687.409 - 10737.822: 91.6270% ( 45) 00:08:44.565 10737.822 - 10788.234: 91.8932% ( 38) 00:08:44.565 10788.234 - 10838.646: 92.1665% ( 39) 00:08:44.565 10838.646 - 10889.058: 92.4538% ( 41) 00:08:44.565 10889.058 - 10939.471: 92.6640% ( 30) 00:08:44.565 10939.471 - 10989.883: 92.8531% ( 27) 00:08:44.565 10989.883 - 11040.295: 93.0493% ( 28) 00:08:44.565 11040.295 - 11090.708: 93.2175% ( 24) 00:08:44.565 11090.708 - 11141.120: 93.3646% ( 21) 00:08:44.565 11141.120 - 11191.532: 93.5188% ( 22) 00:08:44.565 11191.532 - 11241.945: 93.6729% ( 22) 00:08:44.565 11241.945 - 11292.357: 93.8271% ( 22) 00:08:44.566 11292.357 - 11342.769: 94.0163% ( 27) 00:08:44.566 11342.769 - 11393.182: 94.2405% ( 32) 00:08:44.566 11393.182 - 11443.594: 94.3456% ( 15) 00:08:44.566 11443.594 - 11494.006: 94.4577% ( 16) 00:08:44.566 11494.006 - 11544.418: 94.5488% ( 13) 00:08:44.566 11544.418 - 11594.831: 94.6679% ( 17) 00:08:44.566 11594.831 - 11645.243: 94.7940% ( 18) 00:08:44.566 11645.243 - 11695.655: 94.8571% ( 9) 00:08:44.566 11695.655 - 11746.068: 94.8991% ( 6) 00:08:44.566 11746.068 - 11796.480: 94.9341% ( 5) 00:08:44.566 11796.480 - 11846.892: 94.9692% ( 5) 00:08:44.566 11846.892 - 11897.305: 95.0112% ( 6) 00:08:44.566 11897.305 - 11947.717: 95.0322% ( 3) 00:08:44.566 11947.717 - 11998.129: 95.0533% ( 3) 00:08:44.566 11998.129 - 12048.542: 95.0813% ( 4) 00:08:44.566 12048.542 - 12098.954: 95.1023% ( 3) 00:08:44.566 12098.954 - 12149.366: 95.1093% ( 1) 00:08:44.566 12149.366 - 12199.778: 95.1163% ( 1) 00:08:44.566 12199.778 - 12250.191: 95.1303% ( 2) 00:08:44.566 12250.191 - 12300.603: 95.1373% ( 1) 00:08:44.566 12300.603 - 12351.015: 95.1443% ( 1) 00:08:44.566 12351.015 - 12401.428: 95.1584% ( 2) 00:08:44.566 12401.428 - 12451.840: 95.1654% ( 1) 00:08:44.566 12451.840 - 12502.252: 95.1794% ( 2) 00:08:44.566 12502.252 - 12552.665: 95.1864% ( 1) 00:08:44.566 12552.665 - 12603.077: 95.2004% ( 2) 00:08:44.566 12603.077 - 12653.489: 95.2074% ( 1) 00:08:44.566 12653.489 - 12703.902: 95.2214% ( 2) 00:08:44.566 12703.902 - 12754.314: 95.2424% ( 3) 00:08:44.566 12754.314 - 12804.726: 95.2775% ( 5) 00:08:44.566 12804.726 - 12855.138: 95.3055% ( 4) 00:08:44.566 12855.138 - 12905.551: 95.3545% ( 7) 00:08:44.566 12905.551 - 13006.375: 95.4737% ( 17) 00:08:44.566 13006.375 - 13107.200: 95.5928% ( 17) 00:08:44.566 13107.200 - 13208.025: 95.6698% ( 11) 00:08:44.566 13208.025 - 13308.849: 95.8170% ( 21) 00:08:44.566 13308.849 - 13409.674: 95.9851% ( 24) 00:08:44.566 13409.674 - 13510.498: 96.2514% ( 38) 00:08:44.566 13510.498 - 13611.323: 96.5667% ( 45) 00:08:44.566 13611.323 - 13712.148: 96.7629% ( 28) 
00:08:44.566 13712.148 - 13812.972: 97.0362% ( 39) 00:08:44.566 13812.972 - 13913.797: 97.1763% ( 20) 00:08:44.566 13913.797 - 14014.622: 97.3164% ( 20) 00:08:44.566 14014.622 - 14115.446: 97.4776% ( 23) 00:08:44.566 14115.446 - 14216.271: 97.6317% ( 22) 00:08:44.566 14216.271 - 14317.095: 97.8559% ( 32) 00:08:44.566 14317.095 - 14417.920: 98.2974% ( 63) 00:08:44.566 14417.920 - 14518.745: 98.5286% ( 33) 00:08:44.566 14518.745 - 14619.569: 98.8229% ( 42) 00:08:44.566 14619.569 - 14720.394: 98.9910% ( 24) 00:08:44.566 14720.394 - 14821.218: 99.0751% ( 12) 00:08:44.566 14821.218 - 14922.043: 99.0961% ( 3) 00:08:44.566 14922.043 - 15022.868: 99.1031% ( 1) 00:08:44.566 20669.046 - 20769.871: 99.1382% ( 5) 00:08:44.566 20769.871 - 20870.695: 99.1872% ( 7) 00:08:44.566 20870.695 - 20971.520: 99.2223% ( 5) 00:08:44.566 20971.520 - 21072.345: 99.2573% ( 5) 00:08:44.566 21072.345 - 21173.169: 99.2853% ( 4) 00:08:44.566 21173.169 - 21273.994: 99.3063% ( 3) 00:08:44.566 21273.994 - 21374.818: 99.3344% ( 4) 00:08:44.566 21374.818 - 21475.643: 99.3554% ( 3) 00:08:44.566 21475.643 - 21576.468: 99.3834% ( 4) 00:08:44.566 21576.468 - 21677.292: 99.4114% ( 4) 00:08:44.566 21677.292 - 21778.117: 99.4325% ( 3) 00:08:44.566 21778.117 - 21878.942: 99.4535% ( 3) 00:08:44.566 21878.942 - 21979.766: 99.4815% ( 4) 00:08:44.566 21979.766 - 22080.591: 99.5025% ( 3) 00:08:44.566 22080.591 - 22181.415: 99.5305% ( 4) 00:08:44.566 22181.415 - 22282.240: 99.5516% ( 3) 00:08:44.566 24802.855 - 24903.680: 99.5656% ( 2) 00:08:44.566 24903.680 - 25004.505: 99.5866% ( 3) 00:08:44.566 25004.505 - 25105.329: 99.6076% ( 3) 00:08:44.566 25105.329 - 25206.154: 99.6567% ( 7) 00:08:44.566 25206.154 - 25306.978: 99.7337% ( 11) 00:08:44.566 25306.978 - 25407.803: 99.7688% ( 5) 00:08:44.566 25407.803 - 25508.628: 99.7898% ( 3) 00:08:44.566 26214.400 - 26416.049: 99.8178% ( 4) 00:08:44.566 26416.049 - 26617.698: 99.8599% ( 6) 00:08:44.566 26617.698 - 26819.348: 99.9089% ( 7) 00:08:44.566 26819.348 - 27020.997: 99.9790% ( 10) 00:08:44.566 27020.997 - 27222.646: 100.0000% ( 3) 00:08:44.566 00:08:44.566 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:44.566 ============================================================================== 00:08:44.566 Range in us Cumulative IO count 00:08:44.566 5772.209 - 5797.415: 0.0070% ( 1) 00:08:44.566 5923.446 - 5948.652: 0.0210% ( 2) 00:08:44.566 5948.652 - 5973.858: 0.0841% ( 9) 00:08:44.566 5973.858 - 5999.065: 0.1191% ( 5) 00:08:44.566 5999.065 - 6024.271: 0.2032% ( 12) 00:08:44.566 6024.271 - 6049.477: 0.2803% ( 11) 00:08:44.566 6049.477 - 6074.683: 0.3854% ( 15) 00:08:44.566 6074.683 - 6099.889: 0.6937% ( 44) 00:08:44.566 6099.889 - 6125.095: 0.7707% ( 11) 00:08:44.566 6125.095 - 6150.302: 0.8828% ( 16) 00:08:44.566 6150.302 - 6175.508: 0.9809% ( 14) 00:08:44.566 6175.508 - 6200.714: 1.0930% ( 16) 00:08:44.566 6200.714 - 6225.920: 1.1982% ( 15) 00:08:44.566 6225.920 - 6251.126: 1.3243% ( 18) 00:08:44.566 6251.126 - 6276.332: 1.6396% ( 45) 00:08:44.566 6276.332 - 6301.538: 2.0320% ( 56) 00:08:44.566 6301.538 - 6326.745: 2.1371% ( 15) 00:08:44.566 6326.745 - 6351.951: 2.2842% ( 21) 00:08:44.566 6351.951 - 6377.157: 2.4874% ( 29) 00:08:44.566 6377.157 - 6402.363: 2.6766% ( 27) 00:08:44.566 6402.363 - 6427.569: 3.2791% ( 86) 00:08:44.566 6427.569 - 6452.775: 3.6925% ( 59) 00:08:44.566 6452.775 - 6503.188: 4.5404% ( 121) 00:08:44.566 6503.188 - 6553.600: 4.9958% ( 65) 00:08:44.566 6553.600 - 6604.012: 5.3952% ( 57) 00:08:44.566 6604.012 - 6654.425: 5.7175% ( 46) 00:08:44.566 
6654.425 - 6704.837: 5.8857% ( 24) 00:08:44.566 6704.837 - 6755.249: 6.2220% ( 48) 00:08:44.566 6755.249 - 6805.662: 6.3971% ( 25) 00:08:44.566 6805.662 - 6856.074: 6.6284% ( 33) 00:08:44.566 6856.074 - 6906.486: 6.9367% ( 44) 00:08:44.566 6906.486 - 6956.898: 7.5392% ( 86) 00:08:44.566 6956.898 - 7007.311: 7.9106% ( 53) 00:08:44.566 7007.311 - 7057.723: 8.6113% ( 100) 00:08:44.566 7057.723 - 7108.135: 9.0807% ( 67) 00:08:44.566 7108.135 - 7158.548: 9.3540% ( 39) 00:08:44.566 7158.548 - 7208.960: 9.6132% ( 37) 00:08:44.566 7208.960 - 7259.372: 9.8234% ( 30) 00:08:44.566 7259.372 - 7309.785: 10.2859% ( 66) 00:08:44.566 7309.785 - 7360.197: 10.4961% ( 30) 00:08:44.566 7360.197 - 7410.609: 10.6292% ( 19) 00:08:44.566 7410.609 - 7461.022: 10.8254% ( 28) 00:08:44.566 7461.022 - 7511.434: 11.1407% ( 45) 00:08:44.566 7511.434 - 7561.846: 11.6942% ( 79) 00:08:44.566 7561.846 - 7612.258: 12.2197% ( 75) 00:08:44.566 7612.258 - 7662.671: 12.8223% ( 86) 00:08:44.566 7662.671 - 7713.083: 13.6421% ( 117) 00:08:44.566 7713.083 - 7763.495: 14.6441% ( 143) 00:08:44.566 7763.495 - 7813.908: 16.0034% ( 194) 00:08:44.566 7813.908 - 7864.320: 18.0914% ( 298) 00:08:44.566 7864.320 - 7914.732: 20.2985% ( 315) 00:08:44.566 7914.732 - 7965.145: 22.5617% ( 323) 00:08:44.566 7965.145 - 8015.557: 24.9159% ( 336) 00:08:44.566 8015.557 - 8065.969: 27.5504% ( 376) 00:08:44.566 8065.969 - 8116.382: 30.0168% ( 352) 00:08:44.566 8116.382 - 8166.794: 32.6163% ( 371) 00:08:44.566 8166.794 - 8217.206: 34.8024% ( 312) 00:08:44.566 8217.206 - 8267.618: 37.2057% ( 343) 00:08:44.566 8267.618 - 8318.031: 39.5530% ( 335) 00:08:44.566 8318.031 - 8368.443: 42.3487% ( 399) 00:08:44.566 8368.443 - 8418.855: 44.7800% ( 347) 00:08:44.566 8418.855 - 8469.268: 47.2674% ( 355) 00:08:44.566 8469.268 - 8519.680: 49.9159% ( 378) 00:08:44.566 8519.680 - 8570.092: 52.2141% ( 328) 00:08:44.566 8570.092 - 8620.505: 54.5544% ( 334) 00:08:44.566 8620.505 - 8670.917: 56.6003% ( 292) 00:08:44.566 8670.917 - 8721.329: 58.4781% ( 268) 00:08:44.566 8721.329 - 8771.742: 60.5171% ( 291) 00:08:44.566 8771.742 - 8822.154: 62.2337% ( 245) 00:08:44.566 8822.154 - 8872.566: 63.7612% ( 218) 00:08:44.566 8872.566 - 8922.978: 65.1766% ( 202) 00:08:44.566 8922.978 - 8973.391: 66.6690% ( 213) 00:08:44.566 8973.391 - 9023.803: 67.8251% ( 165) 00:08:44.566 9023.803 - 9074.215: 69.1774% ( 193) 00:08:44.566 9074.215 - 9124.628: 70.2635% ( 155) 00:08:44.566 9124.628 - 9175.040: 71.1603% ( 128) 00:08:44.566 9175.040 - 9225.452: 72.1413% ( 140) 00:08:44.566 9225.452 - 9275.865: 73.3114% ( 167) 00:08:44.566 9275.865 - 9326.277: 74.3274% ( 145) 00:08:44.566 9326.277 - 9376.689: 75.5746% ( 178) 00:08:44.566 9376.689 - 9427.102: 76.6115% ( 148) 00:08:44.566 9427.102 - 9477.514: 77.7186% ( 158) 00:08:44.566 9477.514 - 9527.926: 78.6225% ( 129) 00:08:44.566 9527.926 - 9578.338: 79.4212% ( 114) 00:08:44.566 9578.338 - 9628.751: 80.1359% ( 102) 00:08:44.566 9628.751 - 9679.163: 80.8226% ( 98) 00:08:44.566 9679.163 - 9729.575: 81.5723% ( 107) 00:08:44.566 9729.575 - 9779.988: 82.3571% ( 112) 00:08:44.566 9779.988 - 9830.400: 83.0437% ( 98) 00:08:44.566 9830.400 - 9880.812: 83.7794% ( 105) 00:08:44.566 9880.812 - 9931.225: 84.4311% ( 93) 00:08:44.566 9931.225 - 9981.637: 85.2158% ( 112) 00:08:44.566 9981.637 - 10032.049: 85.9095% ( 99) 00:08:44.566 10032.049 - 10082.462: 86.4280% ( 74) 00:08:44.566 10082.462 - 10132.874: 86.9535% ( 75) 00:08:44.566 10132.874 - 10183.286: 87.3879% ( 62) 00:08:44.566 10183.286 - 10233.698: 87.9204% ( 76) 00:08:44.566 10233.698 - 10284.111: 88.3828% ( 
66) 00:08:44.566 10284.111 - 10334.523: 88.7892% ( 58) 00:08:44.566 10334.523 - 10384.935: 89.1256% ( 48) 00:08:44.566 10384.935 - 10435.348: 89.5109% ( 55) 00:08:44.566 10435.348 - 10485.760: 89.7912% ( 40) 00:08:44.566 10485.760 - 10536.172: 90.1135% ( 46) 00:08:44.566 10536.172 - 10586.585: 90.3938% ( 40) 00:08:44.567 10586.585 - 10636.997: 90.6670% ( 39) 00:08:44.567 10636.997 - 10687.409: 90.8913% ( 32) 00:08:44.567 10687.409 - 10737.822: 91.2416% ( 50) 00:08:44.567 10737.822 - 10788.234: 91.4728% ( 33) 00:08:44.567 10788.234 - 10838.646: 91.7601% ( 41) 00:08:44.567 10838.646 - 10889.058: 92.0053% ( 35) 00:08:44.567 10889.058 - 10939.471: 92.2926% ( 41) 00:08:44.567 10939.471 - 10989.883: 92.6079% ( 45) 00:08:44.567 10989.883 - 11040.295: 92.8041% ( 28) 00:08:44.567 11040.295 - 11090.708: 92.9372% ( 19) 00:08:44.567 11090.708 - 11141.120: 93.0633% ( 18) 00:08:44.567 11141.120 - 11191.532: 93.2035% ( 20) 00:08:44.567 11191.532 - 11241.945: 93.3716% ( 24) 00:08:44.567 11241.945 - 11292.357: 93.4908% ( 17) 00:08:44.567 11292.357 - 11342.769: 93.5678% ( 11) 00:08:44.567 11342.769 - 11393.182: 93.6799% ( 16) 00:08:44.567 11393.182 - 11443.594: 93.7990% ( 17) 00:08:44.567 11443.594 - 11494.006: 93.9041% ( 15) 00:08:44.567 11494.006 - 11544.418: 93.9952% ( 13) 00:08:44.567 11544.418 - 11594.831: 94.1214% ( 18) 00:08:44.567 11594.831 - 11645.243: 94.2054% ( 12) 00:08:44.567 11645.243 - 11695.655: 94.2755% ( 10) 00:08:44.567 11695.655 - 11746.068: 94.3526% ( 11) 00:08:44.567 11746.068 - 11796.480: 94.4086% ( 8) 00:08:44.567 11796.480 - 11846.892: 94.4787% ( 10) 00:08:44.567 11846.892 - 11897.305: 94.6399% ( 23) 00:08:44.567 11897.305 - 11947.717: 94.7309% ( 13) 00:08:44.567 11947.717 - 11998.129: 94.8080% ( 11) 00:08:44.567 11998.129 - 12048.542: 94.8921% ( 12) 00:08:44.567 12048.542 - 12098.954: 94.9411% ( 7) 00:08:44.567 12098.954 - 12149.366: 94.9902% ( 7) 00:08:44.567 12149.366 - 12199.778: 95.0673% ( 11) 00:08:44.567 12199.778 - 12250.191: 95.1023% ( 5) 00:08:44.567 12250.191 - 12300.603: 95.1163% ( 2) 00:08:44.567 12300.603 - 12351.015: 95.1303% ( 2) 00:08:44.567 12351.015 - 12401.428: 95.1513% ( 3) 00:08:44.567 12401.428 - 12451.840: 95.1584% ( 1) 00:08:44.567 12451.840 - 12502.252: 95.1654% ( 1) 00:08:44.567 12502.252 - 12552.665: 95.1864% ( 3) 00:08:44.567 12552.665 - 12603.077: 95.2144% ( 4) 00:08:44.567 12603.077 - 12653.489: 95.3125% ( 14) 00:08:44.567 12653.489 - 12703.902: 95.4176% ( 15) 00:08:44.567 12703.902 - 12754.314: 95.4737% ( 8) 00:08:44.567 12754.314 - 12804.726: 95.5087% ( 5) 00:08:44.567 12804.726 - 12855.138: 95.5647% ( 8) 00:08:44.567 12855.138 - 12905.551: 95.7189% ( 22) 00:08:44.567 12905.551 - 13006.375: 95.9151% ( 28) 00:08:44.567 13006.375 - 13107.200: 96.0692% ( 22) 00:08:44.567 13107.200 - 13208.025: 96.2164% ( 21) 00:08:44.567 13208.025 - 13308.849: 96.3565% ( 20) 00:08:44.567 13308.849 - 13409.674: 96.5387% ( 26) 00:08:44.567 13409.674 - 13510.498: 96.7209% ( 26) 00:08:44.567 13510.498 - 13611.323: 96.9311% ( 30) 00:08:44.567 13611.323 - 13712.148: 97.1272% ( 28) 00:08:44.567 13712.148 - 13812.972: 97.3164% ( 27) 00:08:44.567 13812.972 - 13913.797: 97.4916% ( 25) 00:08:44.567 13913.797 - 14014.622: 97.6457% ( 22) 00:08:44.567 14014.622 - 14115.446: 97.7649% ( 17) 00:08:44.567 14115.446 - 14216.271: 97.8770% ( 16) 00:08:44.567 14216.271 - 14317.095: 97.9961% ( 17) 00:08:44.567 14317.095 - 14417.920: 98.1222% ( 18) 00:08:44.567 14417.920 - 14518.745: 98.2133% ( 13) 00:08:44.567 14518.745 - 14619.569: 98.3184% ( 15) 00:08:44.567 14619.569 - 14720.394: 98.3885% 
00:08:44.567 21:59:17 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:44.567 
00:08:44.567 real 0m2.604s
00:08:44.567 user 0m2.266s
00:08:44.567 sys 0m0.230s
00:08:44.567 21:59:17 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:44.567 ************************************
00:08:44.567 END TEST nvme_perf
00:08:44.567 ************************************
00:08:44.567 21:59:17 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
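NOTE: the latency histograms printed by the perf run above are cumulative, so the first bucket whose "Cumulative IO count" percentage reaches a target gives an upper bound on that percentile. A minimal awk sketch for extracting such a bound from a captured histogram, assuming one "low - high: percent% ( count)" bucket per line with the elapsed-time prefix already stripped (the file name perf_hist.txt is hypothetical):

  awk '/%/ { gsub(/[():%]/, "");                       # fields become: low, -, high, cum_pct, count
             if ($4 + 0 >= 99.0) { print "p99 <= " $3 " us"; exit } }' perf_hist.txt

Changing the 99.0 threshold yields any other percentile bound the same way.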
00:08:44.567 21:59:17 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:44.567 21:59:17 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:44.567 21:59:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:44.567 21:59:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:44.567 ************************************
00:08:44.567 START TEST nvme_hello_world
00:08:44.567 ************************************
00:08:44.567 21:59:17 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:44.567 Initializing NVMe Controllers
00:08:44.567 Attached to 0000:00:10.0
00:08:44.567 Namespace ID: 1 size: 6GB
00:08:44.567 Attached to 0000:00:11.0
00:08:44.567 Namespace ID: 1 size: 5GB
00:08:44.567 Attached to 0000:00:13.0
00:08:44.567 Namespace ID: 1 size: 1GB
00:08:44.567 Attached to 0000:00:12.0
00:08:44.567 Namespace ID: 1 size: 4GB
00:08:44.567 Namespace ID: 2 size: 4GB
00:08:44.567 Namespace ID: 3 size: 4GB
00:08:44.567 Initialization complete.
00:08:44.567 INFO: using host memory buffer for IO
00:08:44.567 Hello world!
00:08:44.567 INFO: using host memory buffer for IO
00:08:44.567 Hello world!
00:08:44.567 INFO: using host memory buffer for IO
00:08:44.567 Hello world!
00:08:44.567 INFO: using host memory buffer for IO
00:08:44.567 Hello world!
00:08:44.567 INFO: using host memory buffer for IO
00:08:44.567 Hello world!
00:08:44.567 INFO: using host memory buffer for IO
00:08:44.567 Hello world!
00:08:44.826 
00:08:44.826 real 0m0.207s
00:08:44.826 user 0m0.073s
00:08:44.826 sys 0m0.099s
00:08:44.826 21:59:17 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:44.826 21:59:17 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:44.826 ************************************
00:08:44.826 END TEST nvme_hello_world
00:08:44.826 ************************************
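NOTE: nvme_hello_world attaches to all four controllers, lists each namespace, then writes a buffer to every namespace and reads it back, printing "Hello world!" once per namespace (six namespaces, six lines above). To repeat it outside the harness, something along these lines should work; the HUGEMEM value is an assumption, and scripts/setup.sh rebinds the controllers from the kernel NVMe driver to a userspace driver, which all of these tests require:

  cd /home/vagrant/spdk_repo/spdk
  sudo HUGEMEM=2048 scripts/setup.sh       # reserve hugepages and rebind the devices (sizing is an assumption)
  sudo build/examples/hello_world -i 0     # the same invocation the harness uses above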
00:08:44.826 21:59:17 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
21:59:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
21:59:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
21:59:17 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_sgl
************************************
21:59:17 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:44.826 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:44.826 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:44.826 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:44.826 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:44.826 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:45.084 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:45.084 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:45.084 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:45.084 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:45.084 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:45.084 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:45.084 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:45.084 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:45.084 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:45.084 NVMe Readv/Writev Request test
00:08:45.085 Attached to 0000:00:10.0
00:08:45.085 Attached to 0000:00:11.0
00:08:45.085 Attached to 0000:00:13.0
00:08:45.085 Attached to 0000:00:12.0
00:08:45.085 0000:00:10.0: build_io_request_2 test passed
00:08:45.085 0000:00:10.0: build_io_request_4 test passed
00:08:45.085 0000:00:10.0: build_io_request_5 test passed
00:08:45.085 0000:00:10.0: build_io_request_6 test passed
00:08:45.085 0000:00:10.0: build_io_request_7 test passed
00:08:45.085 0000:00:10.0: build_io_request_10 test passed
00:08:45.085 0000:00:11.0: build_io_request_2 test passed
00:08:45.085 0000:00:11.0: build_io_request_4 test passed
00:08:45.085 0000:00:11.0: build_io_request_5 test passed
00:08:45.085 0000:00:11.0: build_io_request_6 test passed
00:08:45.085 0000:00:11.0: build_io_request_7 test passed
00:08:45.085 0000:00:11.0: build_io_request_10 test passed
00:08:45.085 Cleaning up...
00:08:45.085 
00:08:45.085 real 0m0.284s
00:08:45.085 user 0m0.137s
00:08:45.085 sys 0m0.099s
00:08:45.085 21:59:17 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:45.085 21:59:17 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:45.085 ************************************
00:08:45.085 END TEST nvme_sgl
00:08:45.085 ************************************
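NOTE: the SGL test issues the same twelve build_io_request_N readv/writev patterns against each controller; a request either completes ("test passed") or is rejected up front ("Invalid IO length parameter"), and both outcomes are treated as expected here, which is why the test still passes. A quick sanity check over a captured log (the file name is hypothetical):

  grep -c 'test passed' nvme_sgl.log                  # 12 in this run: requests 2,4,5,6,7,10 on 0000:00:10.0 and 0000:00:11.0
  grep -c 'Invalid IO length parameter' nvme_sgl.log  # 36 in this run: 6 each on 10.0/11.0, all 12 on 13.0 and 12.0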
00:08:45.085 21:59:17 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:45.085 21:59:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:45.085 21:59:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:45.085 21:59:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.085 ************************************
00:08:45.085 START TEST nvme_e2edp
00:08:45.085 ************************************
00:08:45.085 21:59:17 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:45.345 NVMe Write/Read with End-to-End data protection test
00:08:45.345 Attached to 0000:00:10.0
00:08:45.345 Attached to 0000:00:11.0
00:08:45.345 Attached to 0000:00:13.0
00:08:45.345 Attached to 0000:00:12.0
00:08:45.345 Cleaning up...
00:08:45.345 
00:08:45.345 real 0m0.213s
00:08:45.345 user 0m0.073s
00:08:45.345 sys 0m0.095s
00:08:45.345 21:59:18 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:45.345 21:59:18 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:45.345 ************************************
00:08:45.345 END TEST nvme_e2edp
00:08:45.345 ************************************
00:08:45.345 21:59:18 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:45.345 21:59:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:45.345 21:59:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:45.345 21:59:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.345 ************************************
00:08:45.345 START TEST nvme_reserve
00:08:45.345 ************************************
00:08:45.345 21:59:18 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:45.604 =====================================================
00:08:45.604 NVMe Controller at PCI bus 0, device 16, function 0
00:08:45.604 =====================================================
00:08:45.604 Reservations: Not Supported
00:08:45.604 =====================================================
00:08:45.604 NVMe Controller at PCI bus 0, device 17, function 0
00:08:45.604 =====================================================
00:08:45.604 Reservations: Not Supported
00:08:45.604 =====================================================
00:08:45.604 NVMe Controller at PCI bus 0, device 19, function 0
00:08:45.604 =====================================================
00:08:45.604 Reservations: Not Supported
00:08:45.604 =====================================================
00:08:45.604 NVMe Controller at PCI bus 0, device 18, function 0
00:08:45.604 =====================================================
00:08:45.604 Reservations: Not Supported
00:08:45.604 Reservation test passed
00:08:45.604 
00:08:45.604 real 0m0.215s
00:08:45.604 user 0m0.071s
00:08:45.604 sys 0m0.099s
00:08:45.604 21:59:18 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:45.604 ************************************
00:08:45.604 END TEST nvme_reserve
21:59:18 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:45.604 ************************************
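NOTE: all four controllers report "Reservations: Not Supported", so the reservation test above passes without exercising any reservation commands. Support is advertised per controller in the ONCS field of the Identify Controller data; as an aside outside this harness, it could be checked with nvme-cli on a kernel-attached device (the device name is hypothetical):

  nvme id-ctrl /dev/nvme0 | grep -i oncs   # ONCS bit 5 set => reservations supported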
00:08:45.604 21:59:18 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:45.604 21:59:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:45.604 21:59:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:45.604 21:59:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.604 ************************************
00:08:45.604 START TEST nvme_err_injection
00:08:45.604 ************************************
00:08:45.604 21:59:18 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:45.884 NVMe Error Injection test
00:08:45.884 Attached to 0000:00:10.0
00:08:45.884 Attached to 0000:00:11.0
00:08:45.884 Attached to 0000:00:13.0
00:08:45.884 Attached to 0000:00:12.0
00:08:45.884 0000:00:12.0: get features failed as expected
00:08:45.884 0000:00:10.0: get features failed as expected
00:08:45.884 0000:00:11.0: get features failed as expected
00:08:45.884 0000:00:13.0: get features failed as expected
00:08:45.884 0000:00:10.0: get features successfully as expected
00:08:45.884 0000:00:11.0: get features successfully as expected
00:08:45.884 0000:00:13.0: get features successfully as expected
00:08:45.884 0000:00:12.0: get features successfully as expected
00:08:45.884 0000:00:10.0: read failed as expected
00:08:45.884 0000:00:11.0: read failed as expected
00:08:45.884 0000:00:13.0: read failed as expected
00:08:45.884 0000:00:12.0: read failed as expected
00:08:45.884 0000:00:12.0: read successfully as expected
00:08:45.884 0000:00:10.0: read successfully as expected
00:08:45.884 0000:00:11.0: read successfully as expected
00:08:45.884 0000:00:13.0: read successfully as expected
00:08:45.884 Cleaning up...
00:08:45.884 
00:08:45.884 real 0m0.219s
00:08:45.884 user 0m0.083s
00:08:45.884 sys 0m0.093s
00:08:45.884 21:59:18 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:45.884 21:59:18 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:45.884 ************************************
00:08:45.884 END TEST nvme_err_injection
00:08:45.884 ************************************
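NOTE: the overhead tool below is run with -o 4096 -t 1 -H -i 0; reading the flags by the usual SPDK conventions, -o is the IO size in bytes (4 KiB here) and -t the run time in seconds. Judging from the output it produces, -H enables the submit/complete histograms (an inference from this log, not something the log states), and -i 0 matches the shared-memory id passed to the other tests. A by-hand rerun would look like:

  cd /home/vagrant/spdk_repo/spdk
  sudo test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0   # same flags as the harness invocation below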
00:08:47.267 submit (in ns) avg, min, max = 11668.8, 10624.6, 110571.5 00:08:47.267 complete (in ns) avg, min, max = 8798.0, 7293.1, 70465.4 00:08:47.267 00:08:47.267 Submit histogram 00:08:47.267 ================ 00:08:47.267 Range in us Cumulative Count 00:08:47.267 10.585 - 10.634: 0.0107% ( 1) 00:08:47.267 10.732 - 10.782: 0.0320% ( 2) 00:08:47.267 10.782 - 10.831: 0.0641% ( 3) 00:08:47.267 10.831 - 10.880: 0.1708% ( 10) 00:08:47.267 10.880 - 10.929: 0.5766% ( 38) 00:08:47.267 10.929 - 10.978: 1.7190% ( 107) 00:08:47.267 10.978 - 11.028: 4.2601% ( 238) 00:08:47.267 11.028 - 11.077: 8.5309% ( 400) 00:08:47.267 11.077 - 11.126: 15.2253% ( 627) 00:08:47.267 11.126 - 11.175: 24.0017% ( 822) 00:08:47.267 11.175 - 11.225: 33.3227% ( 873) 00:08:47.267 11.225 - 11.274: 40.9246% ( 712) 00:08:47.267 11.274 - 11.323: 47.4055% ( 607) 00:08:47.267 11.323 - 11.372: 52.4984% ( 477) 00:08:47.267 11.372 - 11.422: 56.6197% ( 386) 00:08:47.267 11.422 - 11.471: 60.5808% ( 371) 00:08:47.267 11.471 - 11.520: 64.5099% ( 368) 00:08:47.267 11.520 - 11.569: 68.8341% ( 405) 00:08:47.267 11.569 - 11.618: 72.6884% ( 361) 00:08:47.267 11.618 - 11.668: 76.4894% ( 356) 00:08:47.267 11.668 - 11.717: 79.8527% ( 315) 00:08:47.267 11.717 - 11.766: 82.6287% ( 260) 00:08:47.267 11.766 - 11.815: 84.8067% ( 204) 00:08:47.267 11.815 - 11.865: 86.5684% ( 165) 00:08:47.267 11.865 - 11.914: 87.9885% ( 133) 00:08:47.267 11.914 - 11.963: 89.0562% ( 100) 00:08:47.267 11.963 - 12.012: 89.9210% ( 81) 00:08:47.267 12.012 - 12.062: 90.6363% ( 67) 00:08:47.267 12.062 - 12.111: 91.2556% ( 58) 00:08:47.267 12.111 - 12.160: 91.7040% ( 42) 00:08:47.267 12.160 - 12.209: 92.1098% ( 38) 00:08:47.267 12.209 - 12.258: 92.6650% ( 52) 00:08:47.267 12.258 - 12.308: 92.9746% ( 29) 00:08:47.267 12.308 - 12.357: 93.3590% ( 36) 00:08:47.267 12.357 - 12.406: 93.7006% ( 32) 00:08:47.267 12.406 - 12.455: 93.9889% ( 27) 00:08:47.267 12.455 - 12.505: 94.2665% ( 26) 00:08:47.267 12.505 - 12.554: 94.5654% ( 28) 00:08:47.267 12.554 - 12.603: 94.7576% ( 18) 00:08:47.267 12.603 - 12.702: 95.0673% ( 29) 00:08:47.267 12.702 - 12.800: 95.3449% ( 26) 00:08:47.267 12.800 - 12.898: 95.5798% ( 22) 00:08:47.267 12.898 - 12.997: 95.6972% ( 11) 00:08:47.267 12.997 - 13.095: 95.7933% ( 9) 00:08:47.267 13.095 - 13.194: 95.9534% ( 15) 00:08:47.267 13.194 - 13.292: 96.0602% ( 10) 00:08:47.267 13.292 - 13.391: 96.1990% ( 13) 00:08:47.267 13.391 - 13.489: 96.2738% ( 7) 00:08:47.267 13.489 - 13.588: 96.4232% ( 14) 00:08:47.267 13.588 - 13.686: 96.5193% ( 9) 00:08:47.267 13.686 - 13.785: 96.5514% ( 3) 00:08:47.267 13.785 - 13.883: 96.5727% ( 2) 00:08:47.267 13.883 - 13.982: 96.6368% ( 6) 00:08:47.267 13.982 - 14.080: 96.6902% ( 5) 00:08:47.267 14.080 - 14.178: 96.7649% ( 7) 00:08:47.267 14.178 - 14.277: 96.8290% ( 6) 00:08:47.267 14.277 - 14.375: 96.8717% ( 4) 00:08:47.267 14.375 - 14.474: 96.8930% ( 2) 00:08:47.267 14.474 - 14.572: 96.9357% ( 4) 00:08:47.267 14.572 - 14.671: 96.9571% ( 2) 00:08:47.267 14.671 - 14.769: 96.9678% ( 1) 00:08:47.267 14.868 - 14.966: 97.0638% ( 9) 00:08:47.267 14.966 - 15.065: 97.0959% ( 3) 00:08:47.267 15.065 - 15.163: 97.1599% ( 6) 00:08:47.267 15.163 - 15.262: 97.2987% ( 13) 00:08:47.267 15.262 - 15.360: 97.3842% ( 8) 00:08:47.267 15.360 - 15.458: 97.4696% ( 8) 00:08:47.267 15.458 - 15.557: 97.5550% ( 8) 00:08:47.267 15.557 - 15.655: 97.5977% ( 4) 00:08:47.267 15.655 - 15.754: 97.6724% ( 7) 00:08:47.267 15.754 - 15.852: 97.7365% ( 6) 00:08:47.267 15.852 - 15.951: 97.7472% ( 1) 00:08:47.267 15.951 - 16.049: 97.7685% ( 2) 00:08:47.267 16.049 - 
16.148: 97.8112% ( 4) 00:08:47.267 16.148 - 16.246: 97.8326% ( 2) 00:08:47.267 16.246 - 16.345: 97.8539% ( 2) 00:08:47.267 16.345 - 16.443: 97.8966% ( 4) 00:08:47.267 16.443 - 16.542: 98.0248% ( 12) 00:08:47.267 16.542 - 16.640: 98.0888% ( 6) 00:08:47.267 16.640 - 16.738: 98.2170% ( 12) 00:08:47.267 16.738 - 16.837: 98.3664% ( 14) 00:08:47.267 16.837 - 16.935: 98.5693% ( 19) 00:08:47.267 16.935 - 17.034: 98.6547% ( 8) 00:08:47.267 17.034 - 17.132: 98.7081% ( 5) 00:08:47.268 17.132 - 17.231: 98.8789% ( 16) 00:08:47.268 17.231 - 17.329: 98.9216% ( 4) 00:08:47.268 17.329 - 17.428: 98.9643% ( 4) 00:08:47.268 17.428 - 17.526: 98.9750% ( 1) 00:08:47.268 17.526 - 17.625: 99.0177% ( 4) 00:08:47.268 17.625 - 17.723: 99.0498% ( 3) 00:08:47.268 17.723 - 17.822: 99.0604% ( 1) 00:08:47.268 17.822 - 17.920: 99.0925% ( 3) 00:08:47.268 17.920 - 18.018: 99.1458% ( 5) 00:08:47.268 18.018 - 18.117: 99.1565% ( 1) 00:08:47.268 18.117 - 18.215: 99.1779% ( 2) 00:08:47.268 18.215 - 18.314: 99.1886% ( 1) 00:08:47.268 18.412 - 18.511: 99.1992% ( 1) 00:08:47.268 18.511 - 18.609: 99.2206% ( 2) 00:08:47.268 18.609 - 18.708: 99.2419% ( 2) 00:08:47.268 18.708 - 18.806: 99.2633% ( 2) 00:08:47.268 18.806 - 18.905: 99.2846% ( 2) 00:08:47.268 18.905 - 19.003: 99.3060% ( 2) 00:08:47.268 19.003 - 19.102: 99.3167% ( 1) 00:08:47.268 19.102 - 19.200: 99.3274% ( 1) 00:08:47.268 19.200 - 19.298: 99.3594% ( 3) 00:08:47.268 19.298 - 19.397: 99.3807% ( 2) 00:08:47.268 19.397 - 19.495: 99.4021% ( 2) 00:08:47.268 19.594 - 19.692: 99.4448% ( 4) 00:08:47.268 19.692 - 19.791: 99.5195% ( 7) 00:08:47.268 19.889 - 19.988: 99.5409% ( 2) 00:08:47.268 19.988 - 20.086: 99.5516% ( 1) 00:08:47.268 20.086 - 20.185: 99.5943% ( 4) 00:08:47.268 20.283 - 20.382: 99.6263% ( 3) 00:08:47.268 20.382 - 20.480: 99.6583% ( 3) 00:08:47.268 20.578 - 20.677: 99.6797% ( 2) 00:08:47.268 20.677 - 20.775: 99.7010% ( 2) 00:08:47.268 21.169 - 21.268: 99.7224% ( 2) 00:08:47.268 21.366 - 21.465: 99.7331% ( 1) 00:08:47.268 21.563 - 21.662: 99.7438% ( 1) 00:08:47.268 21.858 - 21.957: 99.7544% ( 1) 00:08:47.268 21.957 - 22.055: 99.7651% ( 1) 00:08:47.268 22.055 - 22.154: 99.7758% ( 1) 00:08:47.268 22.351 - 22.449: 99.7971% ( 2) 00:08:47.268 22.843 - 22.942: 99.8078% ( 1) 00:08:47.268 23.138 - 23.237: 99.8185% ( 1) 00:08:47.268 24.222 - 24.320: 99.8292% ( 1) 00:08:47.268 24.320 - 24.418: 99.8398% ( 1) 00:08:47.268 24.615 - 24.714: 99.8505% ( 1) 00:08:47.268 26.978 - 27.175: 99.8612% ( 1) 00:08:47.268 28.160 - 28.357: 99.8719% ( 1) 00:08:47.268 28.554 - 28.751: 99.8826% ( 1) 00:08:47.268 30.917 - 31.114: 99.8932% ( 1) 00:08:47.268 31.114 - 31.311: 99.9039% ( 1) 00:08:47.268 33.477 - 33.674: 99.9146% ( 1) 00:08:47.268 37.809 - 38.006: 99.9253% ( 1) 00:08:47.268 40.960 - 41.157: 99.9466% ( 2) 00:08:47.268 43.126 - 43.323: 99.9573% ( 1) 00:08:47.268 50.412 - 50.806: 99.9680% ( 1) 00:08:47.268 55.532 - 55.926: 99.9786% ( 1) 00:08:47.268 63.409 - 63.803: 99.9893% ( 1) 00:08:47.268 110.277 - 111.065: 100.0000% ( 1) 00:08:47.268 00:08:47.268 Complete histogram 00:08:47.268 ================== 00:08:47.268 Range in us Cumulative Count 00:08:47.268 7.286 - 7.335: 0.2456% ( 23) 00:08:47.268 7.335 - 7.385: 2.9362% ( 252) 00:08:47.268 7.385 - 7.434: 13.9120% ( 1028) 00:08:47.268 7.434 - 7.483: 30.2584% ( 1531) 00:08:47.268 7.483 - 7.532: 43.4230% ( 1233) 00:08:47.268 7.532 - 7.582: 51.8578% ( 790) 00:08:47.268 7.582 - 7.631: 57.3350% ( 513) 00:08:47.268 7.631 - 7.680: 60.5274% ( 299) 00:08:47.268 7.680 - 7.729: 62.4920% ( 184) 00:08:47.268 7.729 - 7.778: 63.5917% ( 103) 00:08:47.268 
7.778 - 7.828: 64.3605% ( 72) 00:08:47.268 7.828 - 7.877: 64.8623% ( 47) 00:08:47.268 7.877 - 7.926: 65.1185% ( 24) 00:08:47.268 7.926 - 7.975: 65.3000% ( 17) 00:08:47.268 7.975 - 8.025: 65.5669% ( 25) 00:08:47.268 8.025 - 8.074: 66.0261% ( 43) 00:08:47.268 8.074 - 8.123: 66.5492% ( 49) 00:08:47.268 8.123 - 8.172: 67.0724% ( 49) 00:08:47.268 8.172 - 8.222: 67.5101% ( 41) 00:08:47.268 8.222 - 8.271: 67.8838% ( 35) 00:08:47.268 8.271 - 8.320: 68.1935% ( 29) 00:08:47.268 8.320 - 8.369: 68.3857% ( 18) 00:08:47.268 8.369 - 8.418: 68.5672% ( 17) 00:08:47.268 8.418 - 8.468: 68.7380% ( 16) 00:08:47.268 8.468 - 8.517: 68.8341% ( 9) 00:08:47.268 8.517 - 8.566: 68.9195% ( 8) 00:08:47.268 8.566 - 8.615: 68.9622% ( 4) 00:08:47.268 8.615 - 8.665: 69.0263% ( 6) 00:08:47.268 8.665 - 8.714: 69.0690% ( 4) 00:08:47.268 8.714 - 8.763: 69.0903% ( 2) 00:08:47.268 8.763 - 8.812: 69.1010% ( 1) 00:08:47.268 8.812 - 8.862: 69.1117% ( 1) 00:08:47.268 8.862 - 8.911: 69.1330% ( 2) 00:08:47.268 8.911 - 8.960: 69.1651% ( 3) 00:08:47.268 8.960 - 9.009: 69.1757% ( 1) 00:08:47.268 9.009 - 9.058: 69.1864% ( 1) 00:08:47.268 9.157 - 9.206: 69.1971% ( 1) 00:08:47.268 9.255 - 9.305: 69.2078% ( 1) 00:08:47.268 9.305 - 9.354: 69.2184% ( 1) 00:08:47.268 9.649 - 9.698: 69.2291% ( 1) 00:08:47.268 9.698 - 9.748: 69.2398% ( 1) 00:08:47.268 9.748 - 9.797: 69.2505% ( 1) 00:08:47.268 9.797 - 9.846: 69.2612% ( 1) 00:08:47.268 10.092 - 10.142: 69.2718% ( 1) 00:08:47.268 10.142 - 10.191: 69.2825% ( 1) 00:08:47.268 10.240 - 10.289: 69.3039% ( 2) 00:08:47.268 10.437 - 10.486: 69.3145% ( 1) 00:08:47.268 10.535 - 10.585: 69.3252% ( 1) 00:08:47.268 10.585 - 10.634: 69.3359% ( 1) 00:08:47.268 10.634 - 10.683: 69.3786% ( 4) 00:08:47.268 10.683 - 10.732: 69.5067% ( 12) 00:08:47.268 10.732 - 10.782: 69.7950% ( 27) 00:08:47.268 10.782 - 10.831: 70.3502% ( 52) 00:08:47.268 10.831 - 10.880: 71.2150% ( 81) 00:08:47.268 10.880 - 10.929: 72.7312% ( 142) 00:08:47.268 10.929 - 10.978: 74.5783% ( 173) 00:08:47.268 10.978 - 11.028: 77.0660% ( 233) 00:08:47.268 11.028 - 11.077: 80.0342% ( 278) 00:08:47.268 11.077 - 11.126: 83.0451% ( 282) 00:08:47.268 11.126 - 11.175: 85.3940% ( 220) 00:08:47.268 11.175 - 11.225: 87.3585% ( 184) 00:08:47.268 11.225 - 11.274: 88.8106% ( 136) 00:08:47.268 11.274 - 11.323: 89.8783% ( 100) 00:08:47.268 11.323 - 11.372: 90.9353% ( 99) 00:08:47.268 11.372 - 11.422: 91.6827% ( 70) 00:08:47.268 11.422 - 11.471: 92.4194% ( 69) 00:08:47.268 11.471 - 11.520: 93.0066% ( 55) 00:08:47.268 11.520 - 11.569: 93.4978% ( 46) 00:08:47.268 11.569 - 11.618: 93.9462% ( 42) 00:08:47.268 11.618 - 11.668: 94.2878% ( 32) 00:08:47.268 11.668 - 11.717: 94.6295% ( 32) 00:08:47.269 11.717 - 11.766: 94.9605% ( 31) 00:08:47.269 11.766 - 11.815: 95.3769% ( 39) 00:08:47.269 11.815 - 11.865: 95.6331% ( 24) 00:08:47.269 11.865 - 11.914: 95.8680% ( 22) 00:08:47.269 11.914 - 11.963: 96.1029% ( 22) 00:08:47.269 11.963 - 12.012: 96.4126% ( 29) 00:08:47.269 12.012 - 12.062: 96.6795% ( 25) 00:08:47.269 12.062 - 12.111: 96.9037% ( 21) 00:08:47.269 12.111 - 12.160: 97.1599% ( 24) 00:08:47.269 12.160 - 12.209: 97.3628% ( 19) 00:08:47.269 12.209 - 12.258: 97.5336% ( 16) 00:08:47.269 12.258 - 12.308: 97.6724% ( 13) 00:08:47.269 12.308 - 12.357: 97.7899% ( 11) 00:08:47.269 12.357 - 12.406: 97.8326% ( 4) 00:08:47.269 12.406 - 12.455: 97.9180% ( 8) 00:08:47.269 12.455 - 12.505: 97.9821% ( 6) 00:08:47.269 12.505 - 12.554: 98.0141% ( 3) 00:08:47.269 12.554 - 12.603: 98.0675% ( 5) 00:08:47.269 12.603 - 12.702: 98.1209% ( 5) 00:08:47.269 12.702 - 12.800: 98.2597% ( 13) 
00:08:47.269 12.800 - 12.898: 98.3451% ( 8) 00:08:47.269 12.898 - 12.997: 98.4732% ( 12) 00:08:47.269 12.997 - 13.095: 98.5586% ( 8) 00:08:47.269 13.095 - 13.194: 98.7188% ( 15) 00:08:47.269 13.194 - 13.292: 98.7935% ( 7) 00:08:47.269 13.292 - 13.391: 98.8789% ( 8) 00:08:47.269 13.391 - 13.489: 98.9216% ( 4) 00:08:47.269 13.489 - 13.588: 98.9643% ( 4) 00:08:47.269 13.588 - 13.686: 99.0391% ( 7) 00:08:47.269 13.686 - 13.785: 99.0711% ( 3) 00:08:47.269 13.785 - 13.883: 99.0818% ( 1) 00:08:47.269 13.883 - 13.982: 99.0925% ( 1) 00:08:47.269 13.982 - 14.080: 99.1138% ( 2) 00:08:47.269 14.178 - 14.277: 99.1352% ( 2) 00:08:47.269 14.277 - 14.375: 99.1458% ( 1) 00:08:47.269 15.065 - 15.163: 99.1565% ( 1) 00:08:47.269 15.262 - 15.360: 99.1672% ( 1) 00:08:47.269 15.655 - 15.754: 99.1779% ( 1) 00:08:47.269 15.852 - 15.951: 99.1886% ( 1) 00:08:47.269 16.345 - 16.443: 99.1992% ( 1) 00:08:47.269 16.738 - 16.837: 99.2099% ( 1) 00:08:47.269 16.935 - 17.034: 99.2206% ( 1) 00:08:47.269 17.231 - 17.329: 99.2419% ( 2) 00:08:47.269 17.329 - 17.428: 99.2740% ( 3) 00:08:47.269 17.428 - 17.526: 99.2846% ( 1) 00:08:47.269 17.526 - 17.625: 99.2953% ( 1) 00:08:47.269 17.625 - 17.723: 99.3274% ( 3) 00:08:47.269 17.822 - 17.920: 99.3594% ( 3) 00:08:47.269 17.920 - 18.018: 99.3701% ( 1) 00:08:47.269 18.018 - 18.117: 99.4128% ( 4) 00:08:47.269 18.117 - 18.215: 99.4555% ( 4) 00:08:47.269 18.215 - 18.314: 99.4662% ( 1) 00:08:47.269 18.314 - 18.412: 99.4875% ( 2) 00:08:47.269 18.412 - 18.511: 99.4982% ( 1) 00:08:47.269 18.511 - 18.609: 99.5622% ( 6) 00:08:47.269 18.609 - 18.708: 99.5836% ( 2) 00:08:47.269 18.708 - 18.806: 99.5943% ( 1) 00:08:47.269 18.806 - 18.905: 99.6050% ( 1) 00:08:47.269 18.905 - 19.003: 99.6156% ( 1) 00:08:47.269 19.003 - 19.102: 99.6263% ( 1) 00:08:47.269 19.102 - 19.200: 99.6370% ( 1) 00:08:47.269 19.200 - 19.298: 99.6583% ( 2) 00:08:47.269 19.298 - 19.397: 99.6690% ( 1) 00:08:47.269 19.397 - 19.495: 99.6797% ( 1) 00:08:47.269 19.495 - 19.594: 99.6904% ( 1) 00:08:47.269 19.594 - 19.692: 99.7010% ( 1) 00:08:47.269 19.692 - 19.791: 99.7117% ( 1) 00:08:47.269 19.889 - 19.988: 99.7331% ( 2) 00:08:47.269 19.988 - 20.086: 99.7651% ( 3) 00:08:47.269 20.283 - 20.382: 99.7758% ( 1) 00:08:47.269 21.563 - 21.662: 99.7865% ( 1) 00:08:47.269 22.942 - 23.040: 99.8078% ( 2) 00:08:47.269 23.040 - 23.138: 99.8185% ( 1) 00:08:47.269 23.434 - 23.532: 99.8292% ( 1) 00:08:47.269 23.828 - 23.926: 99.8398% ( 1) 00:08:47.269 24.714 - 24.812: 99.8505% ( 1) 00:08:47.269 24.911 - 25.009: 99.8719% ( 2) 00:08:47.269 25.206 - 25.403: 99.8826% ( 1) 00:08:47.269 25.797 - 25.994: 99.8932% ( 1) 00:08:47.269 26.388 - 26.585: 99.9146% ( 2) 00:08:47.269 26.782 - 26.978: 99.9253% ( 1) 00:08:47.269 27.372 - 27.569: 99.9359% ( 1) 00:08:47.269 33.280 - 33.477: 99.9466% ( 1) 00:08:47.269 33.477 - 33.674: 99.9573% ( 1) 00:08:47.269 35.052 - 35.249: 99.9680% ( 1) 00:08:47.269 46.671 - 46.868: 99.9786% ( 1) 00:08:47.269 51.988 - 52.382: 99.9893% ( 1) 00:08:47.269 70.105 - 70.498: 100.0000% ( 1) 00:08:47.269 00:08:47.269 00:08:47.269 real 0m1.219s 00:08:47.269 user 0m1.078s 00:08:47.269 sys 0m0.091s 00:08:47.269 21:59:19 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.269 21:59:19 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:47.269 ************************************ 00:08:47.269 END TEST nvme_overhead 00:08:47.269 ************************************ 00:08:47.269 21:59:19 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 
-i 0 00:08:47.269 21:59:19 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:47.269 21:59:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.269 21:59:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.269 ************************************ 00:08:47.269 START TEST nvme_arbitration 00:08:47.269 ************************************ 00:08:47.269 21:59:19 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:50.565 Initializing NVMe Controllers 00:08:50.565 Attached to 0000:00:10.0 00:08:50.565 Attached to 0000:00:11.0 00:08:50.565 Attached to 0000:00:13.0 00:08:50.565 Attached to 0000:00:12.0 00:08:50.565 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:50.565 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:50.565 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:50.565 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:50.565 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:50.565 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:50.565 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:50.565 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:50.565 Initialization complete. Launching workers. 00:08:50.565 Starting thread on core 1 with urgent priority queue 00:08:50.565 Starting thread on core 2 with urgent priority queue 00:08:50.565 Starting thread on core 3 with urgent priority queue 00:08:50.565 Starting thread on core 0 with urgent priority queue 00:08:50.565 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:50.565 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:50.565 QEMU NVMe Ctrl (12341 ) core 1: 789.33 IO/s 126.69 secs/100000 ios 00:08:50.565 QEMU NVMe Ctrl (12342 ) core 1: 789.33 IO/s 126.69 secs/100000 ios 00:08:50.565 QEMU NVMe Ctrl (12343 ) core 2: 917.33 IO/s 109.01 secs/100000 ios 00:08:50.565 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:08:50.565 ======================================================== 00:08:50.565 00:08:50.565 00:08:50.565 real 0m3.294s 00:08:50.565 user 0m9.249s 00:08:50.565 sys 0m0.091s 00:08:50.565 21:59:23 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.565 21:59:23 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:50.565 ************************************ 00:08:50.565 END TEST nvme_arbitration 00:08:50.565 ************************************ 00:08:50.565 21:59:23 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:50.565 21:59:23 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:50.565 21:59:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.565 21:59:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.565 ************************************ 00:08:50.565 START TEST nvme_single_aen 00:08:50.565 ************************************ 00:08:50.565 21:59:23 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:50.565 Asynchronous Event Request test 00:08:50.565 Attached to 0000:00:10.0 00:08:50.565 Attached to 0000:00:11.0 00:08:50.565 Attached to 0000:00:13.0 00:08:50.565 Attached to 0000:00:12.0 00:08:50.565 Reset controller to setup AER completions for this process 
00:08:50.565 Registering asynchronous event callbacks... 00:08:50.565 Getting orig temperature thresholds of all controllers 00:08:50.565 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.565 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.565 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.565 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.565 Setting all controllers temperature threshold low to trigger AER 00:08:50.565 Waiting for all controllers temperature threshold to be set lower 00:08:50.565 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.565 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:50.565 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.565 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:50.565 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.565 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:50.565 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.565 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:50.565 Waiting for all controllers to trigger AER and reset threshold 00:08:50.565 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.565 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.565 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.565 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.565 Cleaning up... 00:08:50.565 00:08:50.565 real 0m0.214s 00:08:50.565 user 0m0.085s 00:08:50.565 sys 0m0.086s 00:08:50.565 21:59:23 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.565 ************************************ 00:08:50.565 END TEST nvme_single_aen 00:08:50.565 21:59:23 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:50.565 ************************************ 00:08:50.565 21:59:23 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:50.565 21:59:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.565 21:59:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.565 21:59:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.565 ************************************ 00:08:50.565 START TEST nvme_doorbell_aers 00:08:50.565 ************************************ 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:50.565 21:59:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:50.826 [2024-12-06 21:59:23.643244] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:00.852 Executing: test_write_invalid_db 00:09:00.852 Waiting for AER completion... 00:09:00.852 Failure: test_write_invalid_db 00:09:00.852 00:09:00.852 Executing: test_invalid_db_write_overflow_sq 00:09:00.852 Waiting for AER completion... 00:09:00.852 Failure: test_invalid_db_write_overflow_sq 00:09:00.852 00:09:00.852 Executing: test_invalid_db_write_overflow_cq 00:09:00.852 Waiting for AER completion... 00:09:00.852 Failure: test_invalid_db_write_overflow_cq 00:09:00.852 00:09:00.852 21:59:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:00.852 21:59:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:00.852 [2024-12-06 21:59:33.701753] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:10.818 Executing: test_write_invalid_db 00:09:10.818 Waiting for AER completion... 00:09:10.818 Failure: test_write_invalid_db 00:09:10.818 00:09:10.818 Executing: test_invalid_db_write_overflow_sq 00:09:10.818 Waiting for AER completion... 00:09:10.818 Failure: test_invalid_db_write_overflow_sq 00:09:10.818 00:09:10.818 Executing: test_invalid_db_write_overflow_cq 00:09:10.818 Waiting for AER completion... 00:09:10.818 Failure: test_invalid_db_write_overflow_cq 00:09:10.818 00:09:10.818 21:59:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:10.819 21:59:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:11.079 [2024-12-06 21:59:43.700144] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:21.057 Executing: test_write_invalid_db 00:09:21.057 Waiting for AER completion... 00:09:21.057 Failure: test_write_invalid_db 00:09:21.057 00:09:21.057 Executing: test_invalid_db_write_overflow_sq 00:09:21.057 Waiting for AER completion... 00:09:21.057 Failure: test_invalid_db_write_overflow_sq 00:09:21.057 00:09:21.057 Executing: test_invalid_db_write_overflow_cq 00:09:21.057 Waiting for AER completion... 
00:09:21.057 Failure: test_invalid_db_write_overflow_cq 00:09:21.057 00:09:21.057 21:59:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:21.057 21:59:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:21.057 [2024-12-06 21:59:53.746992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 Executing: test_write_invalid_db 00:09:31.070 Waiting for AER completion... 00:09:31.070 Failure: test_write_invalid_db 00:09:31.070 00:09:31.070 Executing: test_invalid_db_write_overflow_sq 00:09:31.070 Waiting for AER completion... 00:09:31.070 Failure: test_invalid_db_write_overflow_sq 00:09:31.070 00:09:31.070 Executing: test_invalid_db_write_overflow_cq 00:09:31.070 Waiting for AER completion... 00:09:31.070 Failure: test_invalid_db_write_overflow_cq 00:09:31.070 00:09:31.070 00:09:31.070 real 0m40.202s 00:09:31.070 user 0m34.285s 00:09:31.070 sys 0m5.557s 00:09:31.070 22:00:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.070 22:00:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:31.070 ************************************ 00:09:31.070 END TEST nvme_doorbell_aers 00:09:31.070 ************************************ 00:09:31.070 22:00:03 nvme -- nvme/nvme.sh@97 -- # uname 00:09:31.070 22:00:03 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:31.070 22:00:03 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:31.070 22:00:03 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:31.070 22:00:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.070 22:00:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.070 ************************************ 00:09:31.070 START TEST nvme_multi_aen 00:09:31.070 ************************************ 00:09:31.070 22:00:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:31.070 [2024-12-06 22:00:03.811705] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.811775] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.811787] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.813601] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.813642] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.813652] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.814889] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. 
Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.814935] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.814947] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.816094] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.816128] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 [2024-12-06 22:00:03.816138] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63271) is not found. Dropping the request. 00:09:31.070 Child process pid: 63793 00:09:31.331 [Child] Asynchronous Event Request test 00:09:31.331 [Child] Attached to 0000:00:10.0 00:09:31.331 [Child] Attached to 0000:00:11.0 00:09:31.331 [Child] Attached to 0000:00:13.0 00:09:31.331 [Child] Attached to 0000:00:12.0 00:09:31.331 [Child] Registering asynchronous event callbacks... 00:09:31.331 [Child] Getting orig temperature thresholds of all controllers 00:09:31.331 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.331 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.331 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.331 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.331 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:31.331 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.331 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.331 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.331 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.331 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.331 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.331 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.331 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.331 [Child] Cleaning up... 00:09:31.331 Asynchronous Event Request test 00:09:31.331 Attached to 0000:00:10.0 00:09:31.331 Attached to 0000:00:11.0 00:09:31.331 Attached to 0000:00:13.0 00:09:31.331 Attached to 0000:00:12.0 00:09:31.331 Reset controller to setup AER completions for this process 00:09:31.331 Registering asynchronous event callbacks... 
00:09:31.331 Getting orig temperature thresholds of all controllers 00:09:31.331 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.331 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.331 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.331 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.331 Setting all controllers temperature threshold low to trigger AER 00:09:31.331 Waiting for all controllers temperature threshold to be set lower 00:09:31.331 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.331 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:31.331 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.331 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:31.331 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.331 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:31.331 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.331 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:31.331 Waiting for all controllers to trigger AER and reset threshold 00:09:31.331 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.331 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.331 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.331 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.331 Cleaning up... 00:09:31.331 00:09:31.331 real 0m0.465s 00:09:31.331 user 0m0.158s 00:09:31.331 sys 0m0.192s 00:09:31.331 22:00:04 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.331 ************************************ 00:09:31.331 END TEST nvme_multi_aen 00:09:31.331 22:00:04 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:31.331 ************************************ 00:09:31.331 22:00:04 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:31.331 22:00:04 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:31.331 22:00:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.331 22:00:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.331 ************************************ 00:09:31.331 START TEST nvme_startup 00:09:31.331 ************************************ 00:09:31.332 22:00:04 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:31.591 Initializing NVMe Controllers 00:09:31.591 Attached to 0000:00:10.0 00:09:31.591 Attached to 0000:00:11.0 00:09:31.592 Attached to 0000:00:13.0 00:09:31.592 Attached to 0000:00:12.0 00:09:31.592 Initialization complete. 00:09:31.592 Time used:149417.266 (us). 
00:09:31.592 00:09:31.592 real 0m0.215s 00:09:31.592 user 0m0.073s 00:09:31.592 sys 0m0.094s 00:09:31.592 22:00:04 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.592 22:00:04 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:31.592 ************************************ 00:09:31.592 END TEST nvme_startup 00:09:31.592 ************************************ 00:09:31.592 22:00:04 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:31.592 22:00:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.592 22:00:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.592 22:00:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.592 ************************************ 00:09:31.592 START TEST nvme_multi_secondary 00:09:31.592 ************************************ 00:09:31.592 22:00:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:31.592 22:00:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63843 00:09:31.592 22:00:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63844 00:09:31.592 22:00:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:31.592 22:00:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:31.592 22:00:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:34.885 Initializing NVMe Controllers 00:09:34.885 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:34.885 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:34.885 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:34.885 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:34.885 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:34.885 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:34.885 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:34.885 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:34.885 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:34.885 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:34.885 Initialization complete. Launching workers. 
00:09:34.885 ======================================================== 00:09:34.885 Latency(us) 00:09:34.885 Device Information : IOPS MiB/s Average min max 00:09:34.885 PCIE (0000:00:10.0) NSID 1 from core 2: 2310.50 9.03 6921.80 1699.92 20394.50 00:09:34.885 PCIE (0000:00:11.0) NSID 1 from core 2: 2310.50 9.03 6924.86 1590.98 19713.87 00:09:34.885 PCIE (0000:00:13.0) NSID 1 from core 2: 2310.50 9.03 6925.16 1571.16 20483.54 00:09:34.885 PCIE (0000:00:12.0) NSID 1 from core 2: 2310.50 9.03 6925.44 1692.61 19252.53 00:09:34.885 PCIE (0000:00:12.0) NSID 2 from core 2: 2310.50 9.03 6925.75 1660.93 20360.65 00:09:34.885 PCIE (0000:00:12.0) NSID 3 from core 2: 2310.50 9.03 6926.28 1685.82 15891.25 00:09:34.885 ======================================================== 00:09:34.885 Total : 13862.99 54.15 6924.88 1571.16 20483.54 00:09:34.885 00:09:34.885 Initializing NVMe Controllers 00:09:34.885 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:34.885 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:34.885 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:34.885 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:34.885 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:34.885 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:34.885 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:34.885 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:34.885 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:34.885 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:34.885 Initialization complete. Launching workers. 00:09:34.885 ======================================================== 00:09:34.885 Latency(us) 00:09:34.885 Device Information : IOPS MiB/s Average min max 00:09:34.885 PCIE (0000:00:10.0) NSID 1 from core 1: 5598.39 21.87 2856.45 1109.72 9908.15 00:09:34.885 PCIE (0000:00:11.0) NSID 1 from core 1: 5598.39 21.87 2857.55 1019.84 10568.61 00:09:34.885 PCIE (0000:00:13.0) NSID 1 from core 1: 5598.39 21.87 2857.51 1071.29 10408.87 00:09:34.885 PCIE (0000:00:12.0) NSID 1 from core 1: 5598.39 21.87 2857.55 1057.14 10817.97 00:09:34.885 PCIE (0000:00:12.0) NSID 2 from core 1: 5598.39 21.87 2857.52 1013.64 10770.29 00:09:34.885 PCIE (0000:00:12.0) NSID 3 from core 1: 5598.39 21.87 2857.49 1087.95 10561.47 00:09:34.885 ======================================================== 00:09:34.885 Total : 33590.35 131.21 2857.34 1013.64 10817.97 00:09:34.885 00:09:34.885 22:00:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63843 00:09:37.415 Initializing NVMe Controllers 00:09:37.415 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:37.415 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:37.415 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:37.415 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:37.415 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:37.415 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:37.415 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:37.415 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:37.415 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:37.415 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:37.415 Initialization complete. Launching workers. 
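
The three spdk_nvme_perf invocations traced above (nvme.sh@51-@57) exercise DPDK multi-process mode: every process passes the same shared-memory group ID (-i 0) so the secondaries attach to the primary's hugepage region, and each is pinned to a disjoint core mask (-c 0x1/0x2/0x4). A condensed reconstruction of what the script appears to do, with flag values copied verbatim from the trace; which invocation runs in the foreground is an inference from the script line numbers, not stated in the log.

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0, runs longest (-t 5)
    pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # core 2, foreground
    wait "$pid0"
    wait "$pid1"
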
00:09:37.415 ======================================================== 00:09:37.415 Latency(us) 00:09:37.415 Device Information : IOPS MiB/s Average min max 00:09:37.415 PCIE (0000:00:10.0) NSID 1 from core 0: 9612.14 37.55 1663.28 673.64 9027.77 00:09:37.415 PCIE (0000:00:11.0) NSID 1 from core 0: 9612.14 37.55 1664.23 690.59 9147.97 00:09:37.415 PCIE (0000:00:13.0) NSID 1 from core 0: 9612.14 37.55 1664.26 683.07 8755.98 00:09:37.415 PCIE (0000:00:12.0) NSID 1 from core 0: 9612.14 37.55 1664.30 689.41 8754.38 00:09:37.415 PCIE (0000:00:12.0) NSID 2 from core 0: 9612.14 37.55 1664.33 684.37 8972.72 00:09:37.415 PCIE (0000:00:12.0) NSID 3 from core 0: 9612.14 37.55 1664.37 692.41 8604.26 00:09:37.415 ======================================================== 00:09:37.415 Total : 57672.83 225.28 1664.13 673.64 9147.97 00:09:37.415 00:09:37.415 22:00:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63844 00:09:37.415 22:00:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63913 00:09:37.415 22:00:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:37.415 22:00:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63914 00:09:37.415 22:00:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:37.415 22:00:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:40.697 Initializing NVMe Controllers 00:09:40.697 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:40.697 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:40.697 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:40.697 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:40.697 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:40.697 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:40.697 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:40.697 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:40.697 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:40.697 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:40.697 Initialization complete. Launching workers. 
00:09:40.697 ======================================================== 00:09:40.697 Latency(us) 00:09:40.697 Device Information : IOPS MiB/s Average min max 00:09:40.697 PCIE (0000:00:10.0) NSID 1 from core 1: 7796.56 30.46 2050.82 750.55 5774.42 00:09:40.697 PCIE (0000:00:11.0) NSID 1 from core 1: 7796.56 30.46 2051.86 741.20 5780.93 00:09:40.697 PCIE (0000:00:13.0) NSID 1 from core 1: 7796.56 30.46 2051.84 770.85 6212.40 00:09:40.697 PCIE (0000:00:12.0) NSID 1 from core 1: 7796.56 30.46 2051.82 774.29 5900.18 00:09:40.697 PCIE (0000:00:12.0) NSID 2 from core 1: 7796.56 30.46 2051.81 774.17 5843.43 00:09:40.697 PCIE (0000:00:12.0) NSID 3 from core 1: 7796.56 30.46 2051.89 771.55 6092.10 00:09:40.697 ======================================================== 00:09:40.697 Total : 46779.34 182.73 2051.67 741.20 6212.40 00:09:40.697 00:09:40.697 Initializing NVMe Controllers 00:09:40.697 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:40.697 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:40.697 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:40.697 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:40.697 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:40.697 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:40.697 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:40.697 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:40.697 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:40.697 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:40.697 Initialization complete. Launching workers. 00:09:40.697 ======================================================== 00:09:40.697 Latency(us) 00:09:40.697 Device Information : IOPS MiB/s Average min max 00:09:40.697 PCIE (0000:00:10.0) NSID 1 from core 0: 7806.39 30.49 2048.25 750.44 6041.22 00:09:40.697 PCIE (0000:00:11.0) NSID 1 from core 0: 7806.39 30.49 2049.21 771.49 5995.26 00:09:40.697 PCIE (0000:00:13.0) NSID 1 from core 0: 7806.39 30.49 2049.27 768.01 6132.95 00:09:40.697 PCIE (0000:00:12.0) NSID 1 from core 0: 7806.39 30.49 2049.33 750.30 6861.56 00:09:40.697 PCIE (0000:00:12.0) NSID 2 from core 0: 7806.39 30.49 2049.36 766.40 6882.36 00:09:40.697 PCIE (0000:00:12.0) NSID 3 from core 0: 7806.39 30.49 2049.42 774.88 6639.16 00:09:40.697 ======================================================== 00:09:40.697 Total : 46838.37 182.96 2049.14 750.30 6882.36 00:09:40.697 00:09:42.599 Initializing NVMe Controllers 00:09:42.599 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:42.599 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:42.599 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:42.599 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:42.599 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:42.599 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:42.599 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:42.599 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:42.599 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:42.599 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:42.599 Initialization complete. Launching workers. 
00:09:42.599 ======================================================== 00:09:42.599 Latency(us) 00:09:42.599 Device Information : IOPS MiB/s Average min max 00:09:42.599 PCIE (0000:00:10.0) NSID 1 from core 2: 4406.34 17.21 3629.73 760.15 12704.09 00:09:42.599 PCIE (0000:00:11.0) NSID 1 from core 2: 4406.34 17.21 3630.78 753.06 13700.47 00:09:42.599 PCIE (0000:00:13.0) NSID 1 from core 2: 4406.34 17.21 3630.53 801.30 13791.44 00:09:42.599 PCIE (0000:00:12.0) NSID 1 from core 2: 4406.34 17.21 3630.30 785.58 15715.61 00:09:42.599 PCIE (0000:00:12.0) NSID 2 from core 2: 4406.34 17.21 3630.43 775.88 13676.23 00:09:42.599 PCIE (0000:00:12.0) NSID 3 from core 2: 4406.34 17.21 3630.38 767.42 13279.91 00:09:42.599 ======================================================== 00:09:42.599 Total : 26438.04 103.27 3630.36 753.06 15715.61 00:09:42.599 00:09:42.599 22:00:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63913 00:09:42.599 22:00:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63914 00:09:42.599 00:09:42.599 real 0m10.773s 00:09:42.599 user 0m18.385s 00:09:42.599 sys 0m0.647s 00:09:42.599 22:00:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.599 22:00:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:42.599 ************************************ 00:09:42.599 END TEST nvme_multi_secondary 00:09:42.599 ************************************ 00:09:42.599 22:00:15 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:42.599 22:00:15 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:42.599 22:00:15 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62876 ]] 00:09:42.599 22:00:15 nvme -- common/autotest_common.sh@1094 -- # kill 62876 00:09:42.599 22:00:15 nvme -- common/autotest_common.sh@1095 -- # wait 62876 00:09:42.599 [2024-12-06 22:00:15.195843] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.195925] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.195957] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.195977] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.198467] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.198527] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.198545] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.198564] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.200669] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 
00:09:42.599 [2024-12-06 22:00:15.200705] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.200715] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.200726] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.202836] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.202889] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.202900] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.202912] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63786) is not found. Dropping the request. 00:09:42.599 [2024-12-06 22:00:15.322376] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:09:42.599 22:00:15 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:42.599 22:00:15 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:42.599 22:00:15 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:42.599 22:00:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.599 22:00:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.599 22:00:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.599 ************************************ 00:09:42.599 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:42.599 ************************************ 00:09:42.599 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:42.599 * Looking for test storage... 
00:09:42.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.599 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.599 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.599 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:42.860 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.861 --rc genhtml_branch_coverage=1 00:09:42.861 --rc genhtml_function_coverage=1 00:09:42.861 --rc genhtml_legend=1 00:09:42.861 --rc geninfo_all_blocks=1 00:09:42.861 --rc geninfo_unexecuted_blocks=1 00:09:42.861 00:09:42.861 ' 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.861 --rc genhtml_branch_coverage=1 00:09:42.861 --rc genhtml_function_coverage=1 00:09:42.861 --rc genhtml_legend=1 00:09:42.861 --rc geninfo_all_blocks=1 00:09:42.861 --rc geninfo_unexecuted_blocks=1 00:09:42.861 00:09:42.861 ' 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.861 --rc genhtml_branch_coverage=1 00:09:42.861 --rc genhtml_function_coverage=1 00:09:42.861 --rc genhtml_legend=1 00:09:42.861 --rc geninfo_all_blocks=1 00:09:42.861 --rc geninfo_unexecuted_blocks=1 00:09:42.861 00:09:42.861 ' 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.861 --rc genhtml_branch_coverage=1 00:09:42.861 --rc genhtml_function_coverage=1 00:09:42.861 --rc genhtml_legend=1 00:09:42.861 --rc geninfo_all_blocks=1 00:09:42.861 --rc geninfo_unexecuted_blocks=1 00:09:42.861 00:09:42.861 ' 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:42.861 
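
The scripts/common.sh trace above (lt -> cmp_versions) is how autotest decides whether the installed lcov (1.15 here) predates version 2 and therefore needs the legacy --rc option spelling exported just afterwards. A simplified reconstruction of that component-wise comparison follows; it is not the exact helper, which also handles the >, >=, <= operators and the decimal normalization visible in the trace.

    lt() {   # succeed iff version $1 is strictly older than $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        # walk the longer of the two component lists, padding missing parts with 0
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # all components equal -> not strictly less-than
    }

    # same probe the trace performs: last field of `lcov --version` is "1.15"
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
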
22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64078 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64078 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64078 ']' 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
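
Both this get_first_nvme_bdf trace and the earlier nvme_doorbell_aers setup build the device list the same way: scripts/gen_nvme.sh emits an SPDK bdev config and jq pulls each controller's PCI address out of .params.traddr. A condensed sketch of the two usage patterns, assuming the repo path from this job:

    rootdir=/home/vagrant/spdk_repo/spdk

    get_nvme_bdfs() {
        "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    }

    bdfs=($(get_nvme_bdfs))
    (( ${#bdfs[@]} == 0 )) && { echo 'no NVMe controllers found' >&2; exit 1; }

    # nvme_reset_stuck_adm_cmd targets only the first controller...
    bdf=${bdfs[0]}          # 0000:00:10.0 in this run

    # ...while nvme_doorbell_aers loops over all of them, giving each
    # doorbell_aers run 10 seconds before it is terminated:
    for b in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$b"
    done
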
00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.861 22:00:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:42.861 [2024-12-06 22:00:15.634933] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:09:42.861 [2024-12-06 22:00:15.635050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64078 ] 00:09:43.122 [2024-12-06 22:00:15.807756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.122 [2024-12-06 22:00:15.910255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.122 [2024-12-06 22:00:15.910368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.122 [2024-12-06 22:00:15.910751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.122 [2024-12-06 22:00:15.910752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.694 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.694 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:43.694 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:43.694 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.694 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:43.967 nvme0n1 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ZMpCw.txt 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:43.967 true 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733522416 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64101 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:43.967 22:00:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:45.908 [2024-12-06 22:00:18.618785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:45.908 [2024-12-06 22:00:18.619401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:45.908 [2024-12-06 22:00:18.619498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:45.908 [2024-12-06 22:00:18.619553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:45.908 [2024-12-06 22:00:18.621438] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:45.908 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64101 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64101 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64101 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:45.908 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ZMpCw.txt 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ZMpCw.txt 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64078 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64078 ']' 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64078 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64078 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.909 killing process with pid 64078 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64078' 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64078 00:09:45.909 22:00:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64078 00:09:47.284 22:00:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:47.284 22:00:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:47.284 ************************************ 00:09:47.284 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:47.284 ************************************ 00:09:47.284 00:09:47.284 real 0m4.611s 00:09:47.284 user 
0m16.306s 00:09:47.284 sys 0m0.515s 00:09:47.284 22:00:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.284 22:00:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:47.284 22:00:19 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:47.284 22:00:19 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:47.284 22:00:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.284 22:00:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.284 22:00:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.284 ************************************ 00:09:47.284 START TEST nvme_fio 00:09:47.284 ************************************ 00:09:47.284 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:47.284 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:47.284 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:47.284 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:47.284 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:47.284 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:47.284 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:47.284 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:47.284 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:47.284 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:47.284 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:47.285 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:47.285 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:47.285 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:47.285 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:47.285 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:47.547 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:47.547 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:47.808 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:47.808 22:00:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1344 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:47.808 22:00:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:48.068 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:48.068 fio-3.35 00:09:48.068 Starting 1 thread 00:09:53.349 00:09:53.350 test: (groupid=0, jobs=1): err= 0: pid=64237: Fri Dec 6 22:00:25 2024 00:09:53.350 read: IOPS=20.2k, BW=79.0MiB/s (82.8MB/s)(158MiB/2001msec) 00:09:53.350 slat (usec): min=3, max=168, avg= 5.80, stdev= 2.78 00:09:53.350 clat (usec): min=204, max=7420, avg=2812.61, stdev=1007.27 00:09:53.350 lat (usec): min=209, max=7491, avg=2818.41, stdev=1008.76 00:09:53.350 clat percentiles (usec): 00:09:53.350 | 1.00th=[ 1221], 5.00th=[ 1467], 10.00th=[ 1778], 20.00th=[ 2376], 00:09:53.350 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2704], 00:09:53.350 | 70.00th=[ 2769], 80.00th=[ 3032], 90.00th=[ 3884], 95.00th=[ 5145], 00:09:53.350 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7046], 99.95th=[ 7177], 00:09:53.350 | 99.99th=[ 7373] 00:09:53.350 bw ( KiB/s): min=69976, max=84832, per=97.40%, avg=78776.00, stdev=7798.87, samples=3 00:09:53.350 iops : min=17494, max=21208, avg=19694.00, stdev=1949.72, samples=3 00:09:53.350 write: IOPS=20.2k, BW=78.8MiB/s (82.7MB/s)(158MiB/2001msec); 0 zone resets 00:09:53.350 slat (nsec): min=3453, max=85455, avg=6289.50, stdev=2755.08 00:09:53.350 clat (usec): min=238, max=20941, avg=3497.34, stdev=2166.40 00:09:53.350 lat (usec): min=243, max=20947, avg=3503.62, stdev=2167.08 00:09:53.350 clat percentiles (usec): 00:09:53.350 | 1.00th=[ 1352], 5.00th=[ 1827], 10.00th=[ 2343], 20.00th=[ 2540], 00:09:53.350 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:53.350 | 70.00th=[ 3032], 80.00th=[ 3818], 90.00th=[ 6259], 95.00th=[ 8455], 00:09:53.350 | 99.00th=[11731], 99.50th=[14222], 99.90th=[18220], 99.95th=[19268], 00:09:53.350 | 99.99th=[20841] 00:09:53.350 bw ( KiB/s): min=70576, max=85072, per=97.69%, avg=78850.67, stdev=7462.95, samples=3 00:09:53.350 iops : min=17644, max=21268, avg=19712.67, stdev=1865.74, samples=3 00:09:53.350 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.09% 00:09:53.350 lat (msec) : 2=9.67%, 4=76.17%, 10=12.54%, 20=1.49%, 50=0.02% 00:09:53.350 cpu : usr=99.25%, sys=0.00%, ctx=3, majf=0, minf=608 
00:09:53.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:53.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.350 issued rwts: total=40458,40379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.350 00:09:53.350 Run status group 0 (all jobs): 00:09:53.350 READ: bw=79.0MiB/s (82.8MB/s), 79.0MiB/s-79.0MiB/s (82.8MB/s-82.8MB/s), io=158MiB (166MB), run=2001-2001msec 00:09:53.350 WRITE: bw=78.8MiB/s (82.7MB/s), 78.8MiB/s-78.8MiB/s (82.7MB/s-82.7MB/s), io=158MiB (165MB), run=2001-2001msec 00:09:53.350 ----------------------------------------------------- 00:09:53.350 Suppressions used: 00:09:53.350 count bytes template 00:09:53.350 1 32 /usr/src/fio/parse.c 00:09:53.350 1 8 libtcmalloc_minimal.so 00:09:53.350 ----------------------------------------------------- 00:09:53.350 00:09:53.350 22:00:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:53.350 22:00:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:53.350 22:00:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:53.350 22:00:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:53.350 22:00:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:53.350 22:00:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:53.350 22:00:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:53.350 22:00:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:53.350 22:00:26 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:53.350 22:00:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:53.608 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:53.608 fio-3.35 00:09:53.608 Starting 1 thread 00:09:58.920 00:09:58.920 test: (groupid=0, jobs=1): err= 0: pid=64297: Fri Dec 6 22:00:31 2024 00:09:58.920 read: IOPS=20.5k, BW=80.0MiB/s (83.9MB/s)(160MiB/2001msec) 00:09:58.920 slat (nsec): min=3376, max=90291, avg=5216.39, stdev=2490.53 00:09:58.920 clat (usec): min=227, max=8046, avg=2762.68, stdev=948.19 00:09:58.920 lat (usec): min=232, max=8051, avg=2767.90, stdev=949.45 00:09:58.920 clat percentiles (usec): 00:09:58.920 | 1.00th=[ 1172], 5.00th=[ 1434], 10.00th=[ 1860], 20.00th=[ 2409], 00:09:58.920 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:58.920 | 70.00th=[ 2737], 80.00th=[ 2933], 90.00th=[ 3720], 95.00th=[ 4883], 00:09:58.920 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7570], 99.95th=[ 7701], 00:09:58.920 | 99.99th=[ 7898] 00:09:58.920 bw ( KiB/s): min=63656, max=92672, per=98.31%, avg=80533.33, stdev=15077.24, samples=3 00:09:58.921 iops : min=15914, max=23168, avg=20133.33, stdev=3769.31, samples=3 00:09:58.921 write: IOPS=20.4k, BW=79.8MiB/s (83.7MB/s)(160MiB/2001msec); 0 zone resets 00:09:58.921 slat (nsec): min=3499, max=76719, avg=5558.13, stdev=2585.93 00:09:58.921 clat (usec): min=249, max=25997, avg=3472.82, stdev=2712.34 00:09:58.921 lat (usec): min=254, max=26002, avg=3478.37, stdev=2712.91 00:09:58.921 clat percentiles (usec): 00:09:58.921 | 1.00th=[ 1319], 5.00th=[ 1811], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:58.921 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2737], 00:09:58.921 | 70.00th=[ 2868], 80.00th=[ 3359], 90.00th=[ 5604], 95.00th=[ 8356], 00:09:58.921 | 99.00th=[17433], 99.50th=[20055], 99.90th=[23987], 99.95th=[24511], 00:09:58.921 | 99.99th=[25297] 00:09:58.921 bw ( KiB/s): min=64327, max=92800, per=98.63%, avg=80594.33, stdev=14664.61, samples=3 00:09:58.921 iops : min=16081, max=23200, avg=20148.33, stdev=3666.57, samples=3 00:09:58.921 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.09% 00:09:58.921 lat (msec) : 2=8.89%, 4=78.82%, 10=10.29%, 20=1.61%, 50=0.26% 00:09:58.921 cpu : usr=99.30%, sys=0.00%, ctx=3, majf=0, minf=608 00:09:58.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.921 issued rwts: total=40979,40879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.921 00:09:58.921 Run status group 0 (all jobs): 00:09:58.921 READ: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:58.921 WRITE: bw=79.8MiB/s (83.7MB/s), 79.8MiB/s-79.8MiB/s (83.7MB/s-83.7MB/s), io=160MiB (167MB), run=2001-2001msec 00:09:58.921 ----------------------------------------------------- 00:09:58.921 Suppressions used: 00:09:58.921 count bytes template 00:09:58.921 1 32 /usr/src/fio/parse.c 00:09:58.921 1 8 libtcmalloc_minimal.so 00:09:58.921 ----------------------------------------------------- 
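Two of the four controllers are done at this point; the same sequence repeats for 0000:00:12.0 and 0000:00:13.0 below. Condensed, the per-controller body of nvme_fio_test as traced is roughly the following sketch (the extended-LBA block size and the traddr colon-to-dot substitution are assumptions inferred from the log, which only shows the 4096-byte path):

    # Sketch of the loop traced above and below -- not the verbatim nvme.sh.
    for bdf in "${bdfs[@]}"; do
        # Skip controllers that expose no namespaces.
        spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" \
            | grep -qE '^Namespace ID:[0-9]+' || continue
        if spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" \
            | grep -q 'Extended Data LBA'; then
            bs=4160   # assumed: data plus interleaved metadata per block
        else
            bs=4096
        fi
        # fio's filename syntax uses dots where the BDF uses colons.
        fio_nvme example_config.fio "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
        ran_fio=true
    done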
00:09:58.921 00:09:58.921 22:00:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:58.921 22:00:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:58.921 22:00:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:58.921 22:00:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:59.183 22:00:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:59.183 22:00:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:59.443 22:00:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:59.443 22:00:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:59.443 22:00:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:59.443 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:59.443 fio-3.35 00:09:59.443 Starting 1 thread 00:10:06.032 00:10:06.032 test: (groupid=0, jobs=1): err= 0: pid=64353: Fri Dec 6 22:00:38 2024 00:10:06.032 read: IOPS=21.7k, BW=84.7MiB/s (88.8MB/s)(170MiB/2001msec) 00:10:06.032 slat (nsec): min=3339, max=68149, avg=5181.02, stdev=2410.90 00:10:06.032 clat (usec): min=518, max=7939, avg=2941.79, stdev=894.56 00:10:06.032 lat (usec): min=523, max=8007, avg=2946.97, stdev=896.02 00:10:06.032 clat percentiles (usec): 00:10:06.032 | 1.00th=[ 2024], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2474], 00:10:06.032 
| 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2737], 00:10:06.032 | 70.00th=[ 2835], 80.00th=[ 3032], 90.00th=[ 4047], 95.00th=[ 5211], 00:10:06.032 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7046], 99.95th=[ 7111], 00:10:06.032 | 99.99th=[ 7767] 00:10:06.032 bw ( KiB/s): min=83424, max=87928, per=99.26%, avg=86120.00, stdev=2379.69, samples=3 00:10:06.032 iops : min=20856, max=21982, avg=21530.00, stdev=594.92, samples=3 00:10:06.032 write: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(168MiB/2001msec); 0 zone resets 00:10:06.032 slat (nsec): min=3423, max=53848, avg=5430.87, stdev=2438.26 00:10:06.032 clat (usec): min=536, max=7805, avg=2957.78, stdev=902.61 00:10:06.032 lat (usec): min=542, max=7817, avg=2963.21, stdev=904.11 00:10:06.032 clat percentiles (usec): 00:10:06.032 | 1.00th=[ 2040], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2474], 00:10:06.032 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2737], 00:10:06.032 | 70.00th=[ 2835], 80.00th=[ 3064], 90.00th=[ 4113], 95.00th=[ 5276], 00:10:06.032 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7046], 99.95th=[ 7111], 00:10:06.032 | 99.99th=[ 7570] 00:10:06.032 bw ( KiB/s): min=84192, max=87464, per=100.00%, avg=86272.00, stdev=1807.73, samples=3 00:10:06.033 iops : min=21048, max=21866, avg=21568.00, stdev=451.93, samples=3 00:10:06.033 lat (usec) : 750=0.01% 00:10:06.033 lat (msec) : 2=0.85%, 4=88.79%, 10=10.34% 00:10:06.033 cpu : usr=99.20%, sys=0.05%, ctx=4, majf=0, minf=607 00:10:06.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:06.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.033 issued rwts: total=43401,43088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.033 00:10:06.033 Run status group 0 (all jobs): 00:10:06.033 READ: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=170MiB (178MB), run=2001-2001msec 00:10:06.033 WRITE: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=168MiB (176MB), run=2001-2001msec 00:10:06.033 ----------------------------------------------------- 00:10:06.033 Suppressions used: 00:10:06.033 count bytes template 00:10:06.033 1 32 /usr/src/fio/parse.c 00:10:06.033 1 8 libtcmalloc_minimal.so 00:10:06.033 ----------------------------------------------------- 00:10:06.033 00:10:06.033 22:00:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:06.033 22:00:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:06.033 22:00:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:06.033 22:00:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:06.292 22:00:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:06.292 22:00:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:06.553 22:00:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:06.553 22:00:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:06.553 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:06.553 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:06.553 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:06.553 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:06.553 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:06.554 22:00:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:06.815 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:06.815 fio-3.35 00:10:06.815 Starting 1 thread 00:10:14.956 00:10:14.956 test: (groupid=0, jobs=1): err= 0: pid=64424: Fri Dec 6 22:00:47 2024 00:10:14.956 read: IOPS=16.7k, BW=65.2MiB/s (68.4MB/s)(130MiB/2001msec) 00:10:14.956 slat (nsec): min=4845, max=82503, avg=6529.91, stdev=3181.23 00:10:14.956 clat (usec): min=262, max=12099, avg=3804.76, stdev=1130.43 00:10:14.956 lat (usec): min=268, max=12166, avg=3811.29, stdev=1131.82 00:10:14.956 clat percentiles (usec): 00:10:14.956 | 1.00th=[ 2147], 5.00th=[ 2802], 10.00th=[ 2933], 20.00th=[ 3064], 00:10:14.956 | 30.00th=[ 3163], 40.00th=[ 3294], 50.00th=[ 3425], 60.00th=[ 3589], 00:10:14.956 | 70.00th=[ 3851], 80.00th=[ 4359], 90.00th=[ 5473], 95.00th=[ 6325], 00:10:14.956 | 99.00th=[ 7701], 99.50th=[ 8225], 99.90th=[ 8979], 99.95th=[ 9896], 00:10:14.956 | 99.99th=[11731] 00:10:14.956 bw ( KiB/s): min=64288, max=71120, per=100.00%, avg=68400.00, stdev=3622.47, samples=3 00:10:14.956 iops : min=16072, max=17780, avg=17100.00, stdev=905.62, samples=3 00:10:14.956 write: IOPS=16.7k, BW=65.3MiB/s (68.5MB/s)(131MiB/2001msec); 0 zone resets 00:10:14.956 slat (nsec): min=4991, max=69753, avg=6898.66, stdev=3282.77 00:10:14.956 clat (usec): min=342, max=11782, avg=3824.91, stdev=1124.76 00:10:14.956 lat (usec): min=349, max=11796, avg=3831.81, stdev=1126.18 00:10:14.956 clat percentiles (usec): 00:10:14.956 | 1.00th=[ 2212], 5.00th=[ 2835], 10.00th=[ 2966], 20.00th=[ 3097], 00:10:14.956 | 30.00th=[ 3195], 40.00th=[ 3294], 50.00th=[ 3425], 60.00th=[ 3621], 00:10:14.956 | 70.00th=[ 3851], 80.00th=[ 4359], 90.00th=[ 5473], 95.00th=[ 6390], 
00:10:14.956 | 99.00th=[ 7701], 99.50th=[ 8291], 99.90th=[ 9372], 99.95th=[ 9896], 00:10:14.956 | 99.99th=[11338] 00:10:14.956 bw ( KiB/s): min=64720, max=70520, per=100.00%, avg=68216.00, stdev=3078.25, samples=3 00:10:14.956 iops : min=16180, max=17630, avg=17054.00, stdev=769.56, samples=3 00:10:14.956 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:10:14.956 lat (msec) : 2=0.61%, 4=73.87%, 10=25.44%, 20=0.03% 00:10:14.956 cpu : usr=98.85%, sys=0.10%, ctx=5, majf=0, minf=605 00:10:14.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:14.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.956 issued rwts: total=33401,33464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.956 00:10:14.956 Run status group 0 (all jobs): 00:10:14.956 READ: bw=65.2MiB/s (68.4MB/s), 65.2MiB/s-65.2MiB/s (68.4MB/s-68.4MB/s), io=130MiB (137MB), run=2001-2001msec 00:10:14.956 WRITE: bw=65.3MiB/s (68.5MB/s), 65.3MiB/s-65.3MiB/s (68.5MB/s-68.5MB/s), io=131MiB (137MB), run=2001-2001msec 00:10:14.956 ----------------------------------------------------- 00:10:14.956 Suppressions used: 00:10:14.956 count bytes template 00:10:14.956 1 32 /usr/src/fio/parse.c 00:10:14.956 1 8 libtcmalloc_minimal.so 00:10:14.956 ----------------------------------------------------- 00:10:14.956 00:10:14.956 22:00:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:14.956 22:00:47 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:14.956 00:10:14.956 real 0m27.522s 00:10:14.956 user 0m19.845s 00:10:14.956 sys 0m12.483s 00:10:14.956 22:00:47 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.956 22:00:47 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:14.956 ************************************ 00:10:14.956 END TEST nvme_fio 00:10:14.956 ************************************ 00:10:14.956 00:10:14.956 real 1m36.973s 00:10:14.956 user 3m41.545s 00:10:14.956 sys 0m22.803s 00:10:14.956 22:00:47 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.956 ************************************ 00:10:14.956 END TEST nvme 00:10:14.956 ************************************ 00:10:14.956 22:00:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.956 22:00:47 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:14.956 22:00:47 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:14.956 22:00:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:14.956 22:00:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.956 22:00:47 -- common/autotest_common.sh@10 -- # set +x 00:10:14.956 ************************************ 00:10:14.956 START TEST nvme_scc 00:10:14.956 ************************************ 00:10:14.956 22:00:47 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:14.956 * Looking for test storage... 
00:10:14.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:14.956 22:00:47 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:14.956 22:00:47 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:14.956 22:00:47 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:14.956 22:00:47 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.956 22:00:47 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.957 22:00:47 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.957 22:00:47 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:14.957 22:00:47 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.957 22:00:47 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:14.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.957 --rc genhtml_branch_coverage=1 00:10:14.957 --rc genhtml_function_coverage=1 00:10:14.957 --rc genhtml_legend=1 00:10:14.957 --rc geninfo_all_blocks=1 00:10:14.957 --rc geninfo_unexecuted_blocks=1 00:10:14.957 00:10:14.957 ' 00:10:14.957 22:00:47 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:14.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.957 --rc genhtml_branch_coverage=1 00:10:14.957 --rc genhtml_function_coverage=1 00:10:14.957 --rc genhtml_legend=1 00:10:14.957 --rc geninfo_all_blocks=1 00:10:14.957 --rc geninfo_unexecuted_blocks=1 00:10:14.957 00:10:14.957 ' 00:10:14.957 22:00:47 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:14.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.957 --rc genhtml_branch_coverage=1 00:10:14.957 --rc genhtml_function_coverage=1 00:10:14.957 --rc genhtml_legend=1 00:10:14.957 --rc geninfo_all_blocks=1 00:10:14.957 --rc geninfo_unexecuted_blocks=1 00:10:14.957 00:10:14.957 ' 00:10:14.957 22:00:47 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:14.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.957 --rc genhtml_branch_coverage=1 00:10:14.957 --rc genhtml_function_coverage=1 00:10:14.957 --rc genhtml_legend=1 00:10:14.957 --rc geninfo_all_blocks=1 00:10:14.957 --rc geninfo_unexecuted_blocks=1 00:10:14.957 00:10:14.957 ' 00:10:14.957 22:00:47 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:14.957 22:00:47 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:14.957 22:00:47 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:14.957 22:00:47 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:14.957 22:00:47 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.957 22:00:47 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.216 22:00:47 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.216 22:00:47 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.216 22:00:47 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.216 22:00:47 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.216 22:00:47 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.216 22:00:47 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.216 22:00:47 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:15.216 22:00:47 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
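From here nvme_scc sources test/common/nvme/functions.sh and scans every controller; the long run of eval assignments below is nvme_get copying each "field : value" line of nvme-cli's id-ctrl output into a global associative array (nvme0[vid]=0x1b36, nvme0[ssvid]=0x1af4, and so on). A minimal sketch of that loop, with the whitespace trimming simplified relative to the real helper:

    # Sketch of nvme_get as traced below: one associative array per
    # controller, keyed by id-ctrl field name.
    nvme_get() {
        local ref=$1 reg val
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue
            # e.g. reg='vid', val=' 0x1b36'  ->  nvme0[vid]=0x1b36
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ref")
    }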
00:10:15.216 22:00:47 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:15.216 22:00:47 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:15.216 22:00:47 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:15.216 22:00:47 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:15.216 22:00:47 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:15.216 22:00:47 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:15.216 22:00:47 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:15.216 22:00:47 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:15.216 22:00:47 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:15.216 22:00:47 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.216 22:00:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:15.216 22:00:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:15.216 22:00:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:15.216 22:00:47 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:15.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:15.475 Waiting for block devices as requested 00:10:15.733 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.733 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.733 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.990 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.281 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:21.281 22:00:53 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:21.281 22:00:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:21.281 22:00:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:21.281 22:00:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:21.281 22:00:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.281 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:21.282 22:00:53 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:21.282 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:21.283 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:21.284 22:00:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:21.284 
22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.284 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
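Just above (functions.sh@53-58) the same helper is reapplied per namespace: an extglob pattern matches both the generic character node (ng0n1) and the block node (nvme0n1) under the controller's sysfs directory, each match gets its own id-ns array, and the nameref _ctrl_ns files it into nvme0_ns keyed by namespace id. A standalone sketch of that walk, assuming extglob/nullglob are enabled and the nvme_get sketch earlier is in scope:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    declare -A nvme0_ns
    declare -n _ctrl_ns=nvme0_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                 # ng0n1 on this pass, nvme0n1 next
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev      # keyed by namespace id, here "1"
    done

With ctrl=/sys/class/nvme/nvme0 the pattern expands to @(ng0|nvme0n)*, which is why the trace runs id-ns twice against the same 0x140000-block namespace, once per device node.
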
00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:21.285 22:00:53 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:21.285 22:00:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:21.286 22:00:53 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:21.286 22:00:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.286 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:21.287 22:00:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:21.287 22:00:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:21.287 22:00:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:21.287 22:00:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:21.287 22:00:53 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.287 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 
22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:21.288 
22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.288 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:21.289 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
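The xtrace above is the nvme_get helper at work: functions.sh@16-23 run nvme-cli's id-ctrl against /dev/nvme1 and fold every "reg : val" line of its output into the global associative array nvme1 via eval. A minimal sketch of that loop, reconstructed from the trace rather than copied from the real nvme/functions.sh (key/whitespace handling is simplified), looks like this:

    # Sketch only -- reconstructed from the functions.sh@16-23 trace above.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declares nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip headers/blank fields
            reg=${reg//[[:space:]]/}         # "sqes      " -> "sqes"
            eval "${ref}[$reg]=\$val"        # nvme1[sqes]=' 0x66'
        done < <(nvme "$@")                  # runs: nvme id-ctrl /dev/nvme1
    }

    nvme_get nvme1 id-ctrl /dev/nvme1        # as invoked at functions.sh@52

This is also why each field shows up three times in the trace: once in the [[ -n ... ]] guard, once as the eval string, and once as the resulting assignment.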
00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.290 22:00:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:21.290 22:00:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:21.291 22:00:53 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
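The values just captured for ng1n1 are enough to work out the namespace geometry: flbas=0x7 selects LBA format 7 (listed with "(in use)" further down in this trace), whose lbads:12 means 4096-byte blocks, and nsze=0x17a17a is the size in logical blocks. A quick back-of-the-envelope check, not part of the test itself:

    flbas=0x7       # from the trace; format index = flbas & 0xf
    lbads=12        # lbaf7 -> "ms:64 lbads:12 rp:0 (in use)"
    nsze=0x17a17a   # namespace size in logical blocks
    echo "LBA format : $((flbas & 0xf))"                    # 7
    echo "block size : $((1 << lbads)) bytes"               # 4096
    echo "capacity   : $(( (nsze << lbads) >> 20 )) MiB"    # 6049 MiB

So this QEMU namespace is roughly 6 GiB of 4 KiB blocks.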
00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.291 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:21.292 22:00:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 
22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.292 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
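Between the two id-ns dumps the trace shows the namespace walk itself (functions.sh@53-58): a nameref _ctrl_ns points at nvme1_ns, and an extglob pattern matches both device nodes belonging to the controller, the generic char node ng1n1 and the block node nvme1n1, feeding each through nvme_get. A sketch of that walk, reconstructed from the trace (the overwrite note in the last comment is my reading of the glob order; this excerpt cuts off before showing it complete):

    shopt -s extglob nullglob
    declare -A nvme1_ns=()
    declare -n _ctrl_ns=nvme1_ns      # nameref, as at functions.sh@53
    ctrl=/sys/class/nvme/nvme1
    # the pattern expands to @(ng1|nvme1n)* for this controller
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}              # ng1n1 on the first pass, nvme1n1 on the second
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev   # both map to index 1; the later
    done                                  # match (nvme1n1) ends up in the slot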
00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:21.293 
22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:21.293 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:21.294 22:00:53 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:21.294 22:00:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:21.294 22:00:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:21.294 22:00:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:21.294 22:00:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:21.294 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
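The registration traced just above (functions.sh@47-63) is the outer discovery loop: each /sys/class/nvme/nvmeX entry is resolved to its PCI BDF, gated through pci_can_use, and recorded in the ctrls/nvmes/bdfs/ordered_ctrls maps before nvme_get is invoked on it. A minimal bash sketch of that loop, reconstructed from the trace rather than copied from the script (the BDF lookup via the device symlink is an assumption here, and the pci_can_use gate is only noted in a comment):

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue                      # glob may match nothing
  # /sys/class/nvme/nvmeX/device links to the PCI function, e.g. 0000:00:12.0
  pci=$(basename "$(readlink -f "$ctrl/device")")
  # the real loop also gates on pci_can_use "$pci" (scripts/common.sh@18-27)
  ctrl_dev=${ctrl##*/}                            # e.g. nvme2
  ctrls["$ctrl_dev"]=$ctrl_dev
  nvmes["$ctrl_dev"]=${ctrl_dev}_ns               # name of this ctrl's ns map
  bdfs["$ctrl_dev"]=$pci
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # index by controller number
done

for c in "${!bdfs[@]}"; do printf '%s -> %s\n' "$c" "${bdfs[$c]}"; done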
00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:21.295 22:00:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.295 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:21.296 22:00:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:21.296 
22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:21.296 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:21.297 
22:00:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
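Every eval pair in this trace is one iteration of nvme_get (functions.sh@16-23): the "field : value" lines printed by nvme-cli are split on ':' (IFS=:), empty values are skipped, and the rest are evaled into a global associative array named after the device, which is why ng2n1[nsze], ng2n1[ncap] and so on accumulate above. A condensed sketch of that pattern, assuming an nvme binary on PATH (the trace runs a pinned /usr/local/src/nvme-cli/nvme) and with approximate whitespace trimming, not the verbatim script:

nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"                   # e.g. declares a global ng2n1=()
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}            # "nsze      " -> "nsze"
    val=${val# }                        # drop the space after ':'
    [[ -n $val ]] && eval "${ref}[\$reg]=\$val"
  done < <(nvme "$@")
}

nvme_get ng2n1 id-ns /dev/ng2n1         # fills ng2n1[nsze], ng2n1[flbas], ...
echo "nsze=${ng2n1[nsze]} flbas=${ng2n1[flbas]}"

Header lines such as "NVME Identify Namespace 1:" carry no value and are skipped, which is the recurring [[ -n '' ]] check at the start of each nvme_get run in the trace.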
00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- 
00:10:21.297 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 ng2n1[nabo]=0 ng2n1[nabspf]=0 ng2n1[noiob]=0 ng2n1[nvmcap]=0
00:10:21.298 22:00:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 ng2n1[npwa]=0 ng2n1[npdg]=0 ng2n1[npda]=0 ng2n1[nows]=0
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 ng2n1[mcl]=128 ng2n1[msrc]=127 ng2n1[nulbaf]=0 ng2n1[anagrpid]=0 ng2n1[nsattr]=0 ng2n1[nvmsetid]=0 ng2n1[endgid]=0
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 ng2n1[eui64]=0000000000000000
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
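What this trace is exercising: nvme/functions.sh's nvme_get reads `nvme id-ns` output line by line, splits each `field : value` pair on the colon via `IFS=: read -r reg val`, and evals the pair into a global associative array named after the device node (ng2n1 above). A minimal sketch of the same pattern, with a bash nameref standing in for the script's eval and the whitespace cleanup simplified (nvme_get_sketch and its cleanup are illustrative, not the script's exact code):

  nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      local -n arr=$ref                     # bash 4.3+ nameref in place of eval
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}          # "nsze   " -> "nsze", "lbaf  4 " -> "lbaf4"
          [[ -n $reg && -n $val ]] && arr[$reg]=${val# }
      done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
  }
  declare -A ng2n2=()
  nvme_get_sketch ng2n2 /dev/ng2n2          # then: ${ng2n2[nsze]}, ${ng2n2[flbas]}, ...

The header line of id-ns output also contains a colon but no value, so it falls through the `[[ -n $val ]]` guard; that is the `[[ -n '' ]]` step visible at the start of each parse below.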
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:10:21.298 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 ng2n2[ncap]=0x100000 ng2n2[nuse]=0x100000 ng2n2[nsfeat]=0x14 ng2n2[nlbaf]=7 ng2n2[flbas]=0x4
00:10:21.299 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 ng2n2[dpc]=0x1f ng2n2[dps]=0 ng2n2[nmic]=0 ng2n2[rescap]=0 ng2n2[fpi]=0 ng2n2[dlfeat]=1
00:10:21.299 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 ng2n2[nawupf]=0 ng2n2[nacwu]=0 ng2n2[nabsn]=0 ng2n2[nabo]=0 ng2n2[nabspf]=0 ng2n2[noiob]=0 ng2n2[nvmcap]=0
00:10:21.299 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 ng2n2[npwa]=0 ng2n2[npdg]=0 ng2n2[npda]=0 ng2n2[nows]=0
00:10:21.299 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 ng2n2[mcl]=128 ng2n2[msrc]=127 ng2n2[nulbaf]=0 ng2n2[anagrpid]=0 ng2n2[nsattr]=0 ng2n2[nvmsetid]=0 ng2n2[endgid]=0
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 ng2n2[eui64]=0000000000000000
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
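The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` line that opens each block is a bash extglob alternation: for controller nvme2 it expands to every generic character node (ng2n*) and every block node (nvme2n*) under the controller's sysfs directory, which is why each namespace is walked twice in this trace. A standalone illustration of the same glob (the $ctrl value here is just an example):

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  # ${ctrl##*nvme} -> "2", ${ctrl##*/} -> "nvme2"; pattern becomes @(ng2|nvme2n)*
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      echo "namespace node: ${ns##*/}"      # ng2n1 ng2n2 ng2n3 nvme2n1 nvme2n2 ...
  done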
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 ng2n3[ncap]=0x100000 ng2n3[nuse]=0x100000 ng2n3[nsfeat]=0x14 ng2n3[nlbaf]=7 ng2n3[flbas]=0x4
00:10:21.300 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 ng2n3[dpc]=0x1f ng2n3[dps]=0 ng2n3[nmic]=0 ng2n3[rescap]=0 ng2n3[fpi]=0 ng2n3[dlfeat]=1
00:10:21.301 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 ng2n3[nawupf]=0 ng2n3[nacwu]=0 ng2n3[nabsn]=0 ng2n3[nabo]=0 ng2n3[nabspf]=0 ng2n3[noiob]=0 ng2n3[nvmcap]=0
00:10:21.301 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 ng2n3[npwa]=0 ng2n3[npdg]=0 ng2n3[npda]=0 ng2n3[nows]=0
00:10:21.301 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 ng2n3[mcl]=128 ng2n3[msrc]=127 ng2n3[nulbaf]=0 ng2n3[anagrpid]=0 ng2n3[nsattr]=0 ng2n3[nvmsetid]=0 ng2n3[endgid]=0
00:10:21.301 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 ng2n3[eui64]=0000000000000000
00:10:21.301 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
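Each lbafN value parsed above describes one LBA format: ms is the metadata bytes per block, lbads the log2 of the data block size, and rp the relative-performance hint. flbas=0x4 marks lbaf4 as the format in use, i.e. 4096-byte blocks with no interleaved metadata. Decoding lbads from one of these strings is a one-liner (the sed extraction is a hypothetical helper, not from functions.sh):

  fmt='ms:0 lbads:12 rp:0 (in use)'         # e.g. "${ng2n3[lbaf4]}"
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$fmt")
  echo "block size: $((1 << lbads)) bytes"  # block size: 4096 bytes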
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 nvme2n1[ncap]=0x100000 nvme2n1[nuse]=0x100000 nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x4
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 nvme2n1[dpc]=0x1f nvme2n1[dps]=0 nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0
00:10:21.302 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0
00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0 nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0
00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
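_ctrl_ns is indexed with `${ns##*n}`, which strips everything through the last 'n' of the node name, so ng2n1 and nvme2n1 both map to index 1. Because the glob sorts the generic ng2n* nodes before the block nvme2n* nodes, the nvme2nY entries recorded from here on overwrite the ng2nY entries stored earlier, exactly as the @58 lines show. The expansion itself:

  ns=ng2n1;    echo "${ns##*n}"    # 1
  ns=nvme2n1;  echo "${ns##*n}"    # 1  -> _ctrl_ns[1] ends up as nvme2n1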
]] 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:21.303 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:21.304 22:00:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.304 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:21.305 
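The trace above is nvme/functions.sh's nvme_get at work: it runs nvme-cli's id-ns, splits each "field : value" output line on the colon with IFS, and evals the pair into a global associative array (local -gA), so later checks can read nvme2n2[nsze] directly. A minimal sketch of that pattern; the whitespace trimming details are assumed rather than taken from the script:

nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    declare -gA "$ref=()"                      # e.g. nvme2n2=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # key, padding stripped (assumed)
        [[ -n $reg && -n $val ]] || continue   # skips banner lines, seen as [[ -n '' ]] above
        eval "${ref}[\$reg]=\${val# }"         # nvme2n2[nsze]=0x100000, etc.
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
}

nvme_get nvme2n2 id-ns /dev/nvme2n2            # then: ${nvme2n2[nsze]} -> 0x100000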
00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 nvme2n3[ncap]=0x100000 nvme2n3[nuse]=0x100000 nvme2n3[nsfeat]=0x14 nvme2n3[nlbaf]=7 nvme2n3[flbas]=0x4
00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 nvme2n3[dpc]=0x1f nvme2n3[dps]=0 nvme2n3[nmic]=0 nvme2n3[rescap]=0 nvme2n3[fpi]=0 nvme2n3[dlfeat]=1
00:10:21.305 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 nvme2n3[nawupf]=0 nvme2n3[nacwu]=0 nvme2n3[nabsn]=0 nvme2n3[nabo]=0 nvme2n3[nabspf]=0 nvme2n3[noiob]=0
00:10:21.306 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 nvme2n3[npwg]=0 nvme2n3[npwa]=0 nvme2n3[npdg]=0 nvme2n3[npda]=0 nvme2n3[nows]=0
00:10:21.570 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 nvme2n3[mcl]=128 nvme2n3[msrc]=127 nvme2n3[nulbaf]=0 nvme2n3[anagrpid]=0 nvme2n3[nsattr]=0 nvme2n3[nvmsetid]=0 nvme2n3[endgid]=0
00:10:21.570 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 nvme2n3[eui64]=0000000000000000
00:10:21.570 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
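All three namespaces report flbas=0x4: the low four bits of flbas select the in-use LBA format, and the matching lbaf4 entry ('ms:0 lbads:12 rp:0 (in use)') means a 2^12 = 4096-byte logical block with no metadata. A sketch of reading that back out of the array the trace just filled; the string parsing of the lbaf entry is an assumption:

fmt=$(( ${nvme2n2[flbas]} & 0xf ))          # 0x4 & 0xf -> 4
lbaf=${nvme2n2[lbaf$fmt]}                   # 'ms:0 lbads:12 rp:0 (in use)'
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
echo "block size: $(( 1 << lbads ))"        # -> block size: 4096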
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:10:21.571 22:00:54 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 nvme3[ssvid]=0x1af4 nvme3[sn]='12343 ' nvme3[mn]='QEMU NVMe Ctrl ' nvme3[fr]='8.0.0 '
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 nvme3[ieee]=525400 nvme3[cmic]=0x2 nvme3[mdts]=7 nvme3[cntlid]=0 nvme3[ver]=0x10400
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 nvme3[rtd3e]=0 nvme3[oaes]=0x100 nvme3[ctratt]=0x88010 nvme3[rrls]=0 nvme3[cntrltype]=1
00:10:21.571 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 nvme3[crdt1]=0 nvme3[crdt2]=0 nvme3[crdt3]=0 nvme3[nvmsr]=0 nvme3[vwci]=0 nvme3[mec]=0
00:10:21.572 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a nvme3[acl]=3 nvme3[aerl]=3 nvme3[frmw]=0x3 nvme3[lpa]=0x7 nvme3[elpe]=0 nvme3[npss]=0 nvme3[avscc]=0 nvme3[apsta]=0
00:10:21.572 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 nvme3[cctemp]=373 nvme3[mtfa]=0 nvme3[hmpre]=0 nvme3[hmmin]=0 nvme3[tnvmcap]=0 nvme3[unvmcap]=0 nvme3[rpmbs]=0
00:10:21.572 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 nvme3[dsto]=0 nvme3[fwug]=0 nvme3[kas]=0 nvme3[hctma]=0 nvme3[mntmt]=0 nvme3[mxtmt]=0 nvme3[sanicap]=0
00:10:21.572 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 nvme3[hmmaxd]=0 nvme3[nsetidmax]=0 nvme3[endgidmax]=1 nvme3[anatt]=0 nvme3[anacap]=0 nvme3[anagrpmax]=0 nvme3[nanagrpid]=0
00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 nvme3[domainid]=0 nvme3[megcap]=0 nvme3[sqes]=0x66 nvme3[cqes]=0x44 nvme3[maxcmd]=0 nvme3[nn]=256
00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d nvme3[fuses]=0 nvme3[fna]=0 nvme3[vwc]=0x7 nvme3[awun]=0 nvme3[awupf]=0 nvme3[icsvscc]=0
-r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.573 22:00:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.573 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:21.574 22:00:54 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:21.574 22:00:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
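The loop traced here is get_ctrls_with_feature: for each scanned controller it fetches oncs and evaluates (( oncs & 1 << 8 )), since ONCS bit 8 advertises the Copy (simple copy) command — every controller reporting oncs=0x15d therefore passes. A simplified standalone version of that test (not the functions.sh implementation, which reads the cached array instead of re-running id-ctrl):

  # Sketch: test ONCS bit 8 (Copy command support) straight from id-ctrl.
  ctrl_has_scc() {
      local oncs
      oncs=$(nvme id-ctrl "$1" | awk -F: '/^oncs/ {gsub(/ /,"",$2); print $2}')
      (( oncs & 1 << 8 ))   # exit status 0 when the bit is set
  }
  ctrl_has_scc /dev/nvme0 && echo "nvme0 supports Simple Copy"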
00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:21.574 22:00:54 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:21.574 22:00:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:21.574 22:00:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:21.574 22:00:54 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:22.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:22.716 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.716 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.716 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.716 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.716 22:00:55 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:22.716 22:00:55 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:22.716 22:00:55 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.716 22:00:55 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:22.716 ************************************ 00:10:22.716 START TEST nvme_simple_copy 00:10:22.716 ************************************ 00:10:22.716 22:00:55 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:22.978 Initializing NVMe Controllers 00:10:22.978 Attaching to 0000:00:10.0 00:10:22.978 Controller supports SCC. Attached to 0000:00:10.0 00:10:22.978 Namespace ID: 1 size: 6GB 00:10:22.978 Initialization complete. 
00:10:22.978 00:10:22.978 Controller QEMU NVMe Ctrl (12340 ) 00:10:22.978 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:22.978 Namespace Block Size:4096 00:10:22.978 Writing LBAs 0 to 63 with Random Data 00:10:22.978 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:22.978 LBAs matching Written Data: 64 00:10:22.978 00:10:22.978 real 0m0.298s 00:10:22.978 user 0m0.110s 00:10:22.978 sys 0m0.085s 00:10:22.978 ************************************ 00:10:22.978 END TEST nvme_simple_copy 00:10:22.978 ************************************ 00:10:22.978 22:00:55 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.978 22:00:55 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:23.240 ************************************ 00:10:23.240 END TEST nvme_scc 00:10:23.240 ************************************ 00:10:23.240 00:10:23.240 real 0m8.198s 00:10:23.240 user 0m1.203s 00:10:23.240 sys 0m1.661s 00:10:23.240 22:00:55 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.240 22:00:55 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:23.240 22:00:55 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:23.240 22:00:55 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:23.240 22:00:55 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:23.240 22:00:55 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:23.240 22:00:55 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:23.240 22:00:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.240 22:00:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.240 22:00:55 -- common/autotest_common.sh@10 -- # set +x 00:10:23.240 ************************************ 00:10:23.240 START TEST nvme_fdp 00:10:23.240 ************************************ 00:10:23.240 22:00:55 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:23.240 * Looking for test storage... 00:10:23.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:23.240 22:00:55 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:23.240 22:00:55 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:23.240 22:00:55 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:23.240 22:00:56 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.240 22:00:56 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:23.240 22:00:56 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.240 22:00:56 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:23.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.240 --rc genhtml_branch_coverage=1 00:10:23.240 --rc genhtml_function_coverage=1 00:10:23.240 --rc genhtml_legend=1 00:10:23.240 --rc geninfo_all_blocks=1 00:10:23.240 --rc geninfo_unexecuted_blocks=1 00:10:23.240 00:10:23.240 ' 00:10:23.240 22:00:56 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:23.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.240 --rc genhtml_branch_coverage=1 00:10:23.240 --rc genhtml_function_coverage=1 00:10:23.240 --rc genhtml_legend=1 00:10:23.240 --rc geninfo_all_blocks=1 00:10:23.240 --rc geninfo_unexecuted_blocks=1 00:10:23.240 00:10:23.240 ' 00:10:23.240 22:00:56 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:23.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.240 --rc genhtml_branch_coverage=1 00:10:23.240 --rc genhtml_function_coverage=1 00:10:23.240 --rc genhtml_legend=1 00:10:23.240 --rc geninfo_all_blocks=1 00:10:23.240 --rc geninfo_unexecuted_blocks=1 00:10:23.240 00:10:23.240 ' 00:10:23.240 22:00:56 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:23.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.240 --rc genhtml_branch_coverage=1 00:10:23.240 --rc genhtml_function_coverage=1 00:10:23.240 --rc genhtml_legend=1 00:10:23.240 --rc geninfo_all_blocks=1 00:10:23.240 --rc geninfo_unexecuted_blocks=1 00:10:23.240 00:10:23.240 ' 00:10:23.240 22:00:56 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.241 22:00:56 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.241 22:00:56 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.241 22:00:56 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.241 22:00:56 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.241 22:00:56 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.241 22:00:56 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.241 22:00:56 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.241 22:00:56 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:23.241 22:00:56 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:23.241 22:00:56 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:23.241 22:00:56 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:23.241 22:00:56 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:23.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:23.815 Waiting for block devices as requested 00:10:23.815 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.077 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.077 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.077 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:29.355 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:29.355 22:01:02 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:29.355 22:01:02 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:29.355 22:01:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:29.355 22:01:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:29.355 22:01:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.355 22:01:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.355 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:29.356 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.356 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.357 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.357 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:29.358 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 
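Every controller in this run, nvme0 included just above, reports oncs=0x15d. Decoding that value against the ONCS bit assignments in the NVMe base specification — a sketch, not output from the log:

  # Sketch: name the ONCS bits behind the 0x15d value captured above.
  oncs=0x15d
  names=(Compare "Write Uncorrectable" "Dataset Management" "Write Zeroes"
         "Save/Select in Features" Reservations Timestamp Verify Copy)
  for i in "${!names[@]}"; do
      (( oncs & 1 << i )) && echo "bit $i: ${names[$i]}"
  done
  # -> Compare, Dataset Management, Write Zeroes, Save/Select, Timestamp, Copy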
22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:29.358 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:29.358 22:01:02 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:29.358 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:29.359 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
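The repeating IFS=: / read -r / eval records above are the whole parsing trick in nvme/functions.sh: each line of nvme-cli output is split on the first ':' and the pair is eval'd into a global associative array named after the device node. A minimal sketch of that loop, reconstructed from the trace (the helper name is hypothetical and the real script's surrounding logic is assumed; only the calls visible in the trace are mirrored):

# Sketch (inferred from the trace): parse `nvme id-ns` output into an assoc array.
nvme_get_sketch() {
    local ref=$1 reg val
    shift                                    # remaining args: id-ns /dev/ng0n1
    local -gA "$ref=()"                      # global assoc array, as in the @20 records
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # skip lines without a "reg : val" pair
        reg=${reg%% *}                       # trim padding: "nsze      " -> nsze
        eval "${ref}[$reg]=\"${val# }\""     # e.g. ng0n1[nsze]="0x140000"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# Usage matching the trace: nvme_get_sketch ng0n1 id-ns /dev/ng0n1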
00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:29.359 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:29.360 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
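That completes the id-ns dump for ng0n1, and the numbers cross-check neatly: flbas=0x4 selects format lbaf4, the one marked "(in use)" above, i.e. 2^12 = 4096-byte blocks with no metadata, while nsze=0x140000 gives the namespace size in those blocks. A quick check with the captured values (arithmetic only, nothing beyond the trace assumed):

# Values captured above for ng0n1.
nsze=0x140000        # namespace size in logical blocks
lbads=12             # from lbaf4 "ms:0 lbads:12 rp:0 (in use)"
echo $(( nsze * (1 << lbads) ))   # 5368709120 bytes = 5 GiB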
00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.360 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:29.361 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.361 22:01:02 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.361 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:29.362 22:01:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:29.362 22:01:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:29.362 22:01:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.362 22:01:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.362 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:29.363 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
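The oacs=0x12a captured just above is the Optional Admin Command Support bitmask; read against the NVMe base spec (bit positions from the spec, not from this log), it advertises Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5), and Doorbell Buffer Config (bit 8). Bit 5 is the one the nvme_fdp test cares about, since FDP placement hints travel as directives. A sketch of the decode:

# Sketch: decode the OACS value captured above (bit meanings per NVMe base spec).
oacs=0x12a
(( oacs & (1 << 1) )) && echo "Format NVM"
(( oacs & (1 << 3) )) && echo "Namespace Management"
(( oacs & (1 << 5) )) && echo "Directives"              # used by FDP I/O
(( oacs & (1 << 8) )) && echo "Doorbell Buffer Config"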
00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.363 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
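The wctemp=343 and cctemp=373 records a few lines up are in kelvins, which is how the spec reports temperature thresholds; converted, this QEMU controller warns at 70 C and goes critical at 100 C:

# Sketch: convert the captured threshold values from kelvins to Celsius.
wctemp=343; cctemp=373
echo "warning threshold:  $(( wctemp - 273 )) C"    # 70 C
echo "critical threshold: $(( cctemp - 273 )) C"    # 100 C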
00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:29.364 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.365 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:29.366 22:01:02 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:29.366 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.366 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.367 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.367 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:29.367 22:01:02 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:29.367 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:29.368 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
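[Editorial sketch] The functions.sh@16-@23 steps repeated throughout this trace are a single helper, nvme_get, which runs nvme-cli and folds its "field : value" output into a global bash associative array named after the device. The sketch below is reconstructed from those trace lines only; the whitespace trimming and the NVME_CMD indirection for the nvme binary (the trace invokes /usr/local/src/nvme-cli/nvme directly) are assumptions, not verbatim SPDK code.

  # Reconstructed from the functions.sh@16-23 trace lines above.
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                        # e.g. declare -gA 'nvme1n1=()'
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}               # 'nsze   ' -> 'nsze'
          val="${val#"${val%%[![:space:]]*}"}"   # strip leading blanks, keep trailing
          [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
      done < <("${NVME_CMD:-nvme}" "$@")         # NVME_CMD is an assumed indirection
  }

  # Usage mirroring the trace: populate nvme1n1[] from `nvme id-ns`,
  # then look fields up by name.
  nvme_get nvme1n1 id-ns /dev/nvme1n1
  echo "${nvme1n1[nsze]}"                        # -> 0x17a17a in this run

Each eval 'nvme1n1[nsze]="0x17a17a"' entry in the log is one pass of this loop; header lines such as "NVME Identify Namespace 1:" carry no value, which is why every parse starts with a [[ -n '' ]] miss.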
00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:29.368 22:01:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:29.368 22:01:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:29.368 22:01:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:29.368 22:01:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.369 22:01:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:29.369 22:01:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:29.369 22:01:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:29.369 22:01:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:29.369 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.369 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:29.369 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.369 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.640 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:29.640 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:29.640 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.640 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.640 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.640 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.641 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:29.642 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
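
Note: the wctemp and cctemp values just stored above are the Identify Controller thermal thresholds, which the NVMe spec reports in kelvins. A quick conversion against the array populated by this trace (273 is the usual integer approximation of 273.15):

    echo "warning:  $(( ${nvme2[wctemp]} - 273 )) C"   # 343 K -> 70 C
    echo "critical: $(( ${nvme2[cctemp]} - 273 )) C"   # 373 K -> 100 C
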
00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:29.642 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:29.642 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.643 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
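
Note: the controller attributes captured above (ctratt=0x8000 for nvme2) are presumably what the nvme_fdp test later inspects to pick an FDP-capable controller. A hedged sketch of testing a CTRATT capability bit on the parsed value; the FDP bit position (19, i.e. 0x80000) is assumed from the NVMe 2.0 FDP technical proposal and should be checked against the spec:

    fdp=$(( 1 << 19 ))                       # assumed FDP bit in CTRATT
    if (( ${nvme2[ctratt]} & fdp )); then
      echo "nvme2 advertises FDP"
    else
      echo "nvme2 does not advertise FDP"    # 0x8000 & 0x80000 == 0 here
    fi
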
00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 
22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:29.644 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.645 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:29.646 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:29.647 22:01:02 nvme_fdp -- 
00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21-23 -- # ng2n2 id-ns fields (cont.): noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:29.647 22:01:02 nvme_fdp -- nvme/functions.sh@21-23 -- # ng2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:29.648 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
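The @21-@23 steps condensed above are one pass of the capture loop: nvme_get splits each "reg : val" line of the id-ns output on the first colon and evals it into a global associative array named after the node. A minimal standalone sketch of that pattern (a simplified rework, not the actual tests/nvme/functions.sh helper; assumes nvme-cli is on PATH and the namespace node is readable):

#!/usr/bin/env bash
# Capture `nvme id-ns` fields into an associative array, e.g. ng2n2[nsze]=0x100000.
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"            # global assoc array named after the node
    while IFS=: read -r reg val; do  # split "reg : val" on the first colon
        reg=${reg//[[:space:]]/}     # "lbaf  4 " -> "lbaf4"
        read -r val <<< "$val"       # trim padding; internal spaces survive
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"   # mirrors the @23 eval step in the trace
    done < <(nvme id-ns "$dev")
}

nvme_get_sketch ng2n2 /dev/ng2n2
echo "ng2n2 nsze=${ng2n2[nsze]} flbas=${ng2n2[flbas]}"

Reading the command through process substitution keeps the array assignments in the current shell, which is why the trace shows the fields landing in a -gA array rather than in a subshell.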
00:10:29.648 22:01:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:29.648 22:01:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:10:29.648 22:01:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:10:29.648 22:01:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:10:29.648 22:01:02 nvme_fdp -- nvme/functions.sh@17-20 -- # local ref=ng2n3 reg val; shift; local -gA 'ng2n3=()'
00:10:29.648 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:10:29.648 22:01:02 nvme_fdp -- nvme/functions.sh@21-23 -- # ng2n3 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:29.649 22:01:02 nvme_fdp -- nvme/functions.sh@21-23 -- # ng2n3 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:29.649 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
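The @54-@56 lines show how the namespaces are discovered: an extglob pattern under the controller's sysfs directory matches both the generic char nodes (ng2nY) and the block nodes (nvme2nY), which is why each namespace is captured twice in this trace. A standalone sketch of that scan (hypothetical; assumes /sys/class/nvme is populated as in this run):

#!/usr/bin/env bash
# Enumerate namespace nodes per controller, as in the @54 for-loop above.
shopt -s extglob nullglob

for ctrl in /sys/class/nvme/nvme+([0-9]); do
    inst=${ctrl##*nvme}                                  # controller instance, e.g. 2
    for ns in "$ctrl/"@("ng${inst}"|"nvme${inst}n")*; do # ng2*, nvme2n*
        echo "controller nvme${inst}: namespace node ${ns##*/}"
    done
done

nullglob keeps the loop body from running on a literal unexpanded pattern when a controller exposes no matching nodes.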
00:10:29.650 22:01:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:29.650 22:01:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:29.650 22:01:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:29.650 22:01:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:29.650 22:01:02 nvme_fdp -- nvme/functions.sh@17-20 -- # local ref=nvme2n1 reg val; shift; local -gA 'nvme2n1=()'
00:10:29.650 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:29.650 22:01:02 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n1 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:29.651 22:01:02 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:29.651 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
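Every node in this run reports flbas=0x4 with lbaf4='ms:0 lbads:12 rp:0 (in use)', i.e. 4 KiB LBAs with no separate metadata. A short sketch of turning the captured fields into a block size (hypothetical consumer; the array literal simply mirrors the values traced above, and the low FLBAS nibble indexing the LBA format follows the id-ns layout):

#!/usr/bin/env bash
# Derive the active block size from captured id-ns fields.
declare -A nvme2n1=(
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)

fmt=$(( nvme2n1[flbas] & 0xf ))   # low nibble selects the LBA format
lbaf=${nvme2n1[lbaf$fmt]}
lbads=${lbaf##*lbads:}            # -> "12 rp:0 (in use)"
lbads=${lbads%% *}                # -> "12"
echo "lbaf$fmt in use: $((1 << lbads))-byte LBAs"   # -> 4096-byte LBAs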
00:10:29.651 22:01:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:29.651 22:01:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:29.651 22:01:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:10:29.651 22:01:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:29.651 22:01:02 nvme_fdp -- nvme/functions.sh@17-20 -- # local ref=nvme2n2 reg val; shift; local -gA 'nvme2n2=()'
00:10:29.651 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:10:29.652 22:01:02 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n2 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:29.653 22:01:02 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:29.653 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:29.654 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:29.654 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:29.655 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:29.655 22:01:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:29.655 22:01:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:29.655 22:01:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:29.655 22:01:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:29.655 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 
22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:29.656 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:29.657 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
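The repetitive `IFS=:` / `read -r reg val` / `eval` entries above are the nvme_get loop in nvme/functions.sh materializing `nvme id-ctrl` (and, for namespaces, `nvme id-ns`) output into one bash associative array per device, which the feature checks further down read back by name. A condensed, self-contained sketch of the same pattern, assuming nvme-cli is installed; the array name and device path here are placeholders, since the real script evals into a dynamically named array such as nvme3:

declare -A regs                              # functions.sh evals into a dynamically named array instead
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # drop the column padding around the register name
    val=${val#"${val%%[! ]*}"}               # trim leading spaces from the value
    [[ -n $reg && -n $val ]] && regs[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3)            # placeholder device path
echo "ctratt=${regs[ctratt]}"                # on this run's FDP controller: ctratt=0x88010

Each `[[ -n ... ]]` / `eval` pair in the trace is one iteration of this guard-and-assign step, one register per line of id-ctrl output.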
00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.658 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:29.659 22:01:02 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 ))
00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:10:29.659 22:01:02 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:10:29.659 22:01:02 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:10:29.659 22:01:02 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:10:29.659 22:01:02 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:30.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:30.794 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:30.794 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:30.794 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:30.794 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:30.794 22:01:03 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:30.794 22:01:03 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:30.794 22:01:03 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:30.794 22:01:03 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:30.794 ************************************
00:10:30.794 START TEST nvme_flexible_data_placement
00:10:30.794 ************************************
00:10:30.794 22:01:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:31.055 Initializing NVMe Controllers
00:10:31.055 Attaching to 0000:00:13.0
00:10:31.055 Controller supports FDP Attached to 0000:00:13.0
00:10:31.055 Namespace ID: 1 Endurance Group ID: 1
00:10:31.055 Initialization complete.
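The tail of the trace above is the controller selection that makes this an FDP test: get_ctrls_with_feature fdp runs ctrl_has_fdp against each of the four discovered controllers, and ctrl_has_fdp tests bit 19 of the cached CTRATT value, the Flexible Data Placement capability bit in the NVMe identify data. Only nvme3 reports ctratt=0x88010 (bit 19, 0x80000, set); the three 0x8000 controllers fall through, so nvme_fdp.sh binds ctrl=nvme3 at bdf 0000:00:13.0. A minimal standalone sketch of that check, with the CTRATT values from this run hard-coded for illustration:

# CTRATT values as parsed from id-ctrl earlier in this log
declare -A ctratt=( [nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010 )
for ctrl in "${!ctratt[@]}"; do
    # CTRATT bit 19 advertises Flexible Data Placement support
    if (( ${ctratt[$ctrl]} & 1 << 19 )); then
        echo "$ctrl"                         # prints only nvme3: 0x88010 & 0x80000 != 0
    fi
done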
00:10:31.055 00:10:31.055 ================================== 00:10:31.055 == FDP tests for Namespace: #01 == 00:10:31.055 ================================== 00:10:31.055 00:10:31.055 Get Feature: FDP: 00:10:31.055 ================= 00:10:31.055 Enabled: Yes 00:10:31.055 FDP configuration Index: 0 00:10:31.055 00:10:31.055 FDP configurations log page 00:10:31.055 =========================== 00:10:31.055 Number of FDP configurations: 1 00:10:31.055 Version: 0 00:10:31.055 Size: 112 00:10:31.055 FDP Configuration Descriptor: 0 00:10:31.055 Descriptor Size: 96 00:10:31.055 Reclaim Group Identifier format: 2 00:10:31.055 FDP Volatile Write Cache: Not Present 00:10:31.055 FDP Configuration: Valid 00:10:31.055 Vendor Specific Size: 0 00:10:31.055 Number of Reclaim Groups: 2 00:10:31.055 Number of Reclaim Unit Handles: 8 00:10:31.055 Max Placement Identifiers: 128 00:10:31.055 Number of Namespaces Supported: 256 00:10:31.055 Reclaim Unit Nominal Size: 6000000 bytes 00:10:31.055 Estimated Reclaim Unit Time Limit: Not Reported 00:10:31.055 RUH Desc #000: RUH Type: Initially Isolated 00:10:31.055 RUH Desc #001: RUH Type: Initially Isolated 00:10:31.055 RUH Desc #002: RUH Type: Initially Isolated 00:10:31.055 RUH Desc #003: RUH Type: Initially Isolated 00:10:31.055 RUH Desc #004: RUH Type: Initially Isolated 00:10:31.055 RUH Desc #005: RUH Type: Initially Isolated 00:10:31.055 RUH Desc #006: RUH Type: Initially Isolated 00:10:31.055 RUH Desc #007: RUH Type: Initially Isolated 00:10:31.055 00:10:31.055 FDP reclaim unit handle usage log page 00:10:31.055 ====================================== 00:10:31.055 Number of Reclaim Unit Handles: 8 00:10:31.055 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:31.055 RUH Usage Desc #001: RUH Attributes: Unused 00:10:31.055 RUH Usage Desc #002: RUH Attributes: Unused 00:10:31.055 RUH Usage Desc #003: RUH Attributes: Unused 00:10:31.055 RUH Usage Desc #004: RUH Attributes: Unused 00:10:31.055 RUH Usage Desc #005: RUH Attributes: Unused 00:10:31.055 RUH Usage Desc #006: RUH Attributes: Unused 00:10:31.055 RUH Usage Desc #007: RUH Attributes: Unused 00:10:31.055 00:10:31.055 FDP statistics log page 00:10:31.055 ======================= 00:10:31.055 Host bytes with metadata written: 891891712 00:10:31.055 Media bytes with metadata written: 893628416 00:10:31.055 Media bytes erased: 0 00:10:31.055 00:10:31.055 FDP Reclaim unit handle status 00:10:31.055 ============================== 00:10:31.055 Number of RUHS descriptors: 2 00:10:31.055 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000d6d 00:10:31.055 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:31.055 00:10:31.055 FDP write on placement id: 0 success 00:10:31.055 00:10:31.055 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:31.055 00:10:31.055 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:31.055 00:10:31.055 Get Feature: FDP Events for Placement handle: #0 00:10:31.055 ======================== 00:10:31.055 Number of FDP Events: 6 00:10:31.055 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:31.055 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:31.055 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:31.056 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:31.056 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:31.056 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:31.056 00:10:31.056 FDP events log page 
00:10:31.056 =================== 00:10:31.056 Number of FDP events: 1 00:10:31.056 FDP Event #0: 00:10:31.056 Event Type: RU Not Written to Capacity 00:10:31.056 Placement Identifier: Valid 00:10:31.056 NSID: Valid 00:10:31.056 Location: Valid 00:10:31.056 Placement Identifier: 0 00:10:31.056 Event Timestamp: 6 00:10:31.056 Namespace Identifier: 1 00:10:31.056 Reclaim Group Identifier: 0 00:10:31.056 Reclaim Unit Handle Identifier: 0 00:10:31.056 00:10:31.056 FDP test passed 00:10:31.056 00:10:31.056 real 0m0.250s 00:10:31.056 user 0m0.080s 00:10:31.056 sys 0m0.068s 00:10:31.056 22:01:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.056 22:01:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:31.056 ************************************ 00:10:31.056 END TEST nvme_flexible_data_placement 00:10:31.056 ************************************ 00:10:31.317 00:10:31.317 real 0m8.017s 00:10:31.317 user 0m1.140s 00:10:31.317 sys 0m1.567s 00:10:31.317 22:01:03 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.317 22:01:03 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:31.317 ************************************ 00:10:31.317 END TEST nvme_fdp 00:10:31.317 ************************************ 00:10:31.317 22:01:03 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:31.317 22:01:03 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:31.317 22:01:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.317 22:01:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.317 22:01:03 -- common/autotest_common.sh@10 -- # set +x 00:10:31.317 ************************************ 00:10:31.317 START TEST nvme_rpc 00:10:31.317 ************************************ 00:10:31.317 22:01:03 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:31.317 * Looking for test storage... 
00:10:31.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:31.317 22:01:04 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:31.317 22:01:04 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.318 22:01:04 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:31.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.318 --rc genhtml_branch_coverage=1 00:10:31.318 --rc genhtml_function_coverage=1 00:10:31.318 --rc genhtml_legend=1 00:10:31.318 --rc geninfo_all_blocks=1 00:10:31.318 --rc geninfo_unexecuted_blocks=1 00:10:31.318 00:10:31.318 ' 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:31.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.318 --rc genhtml_branch_coverage=1 00:10:31.318 --rc genhtml_function_coverage=1 00:10:31.318 --rc genhtml_legend=1 00:10:31.318 --rc geninfo_all_blocks=1 00:10:31.318 --rc geninfo_unexecuted_blocks=1 00:10:31.318 00:10:31.318 ' 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:31.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.318 --rc genhtml_branch_coverage=1 00:10:31.318 --rc genhtml_function_coverage=1 00:10:31.318 --rc genhtml_legend=1 00:10:31.318 --rc geninfo_all_blocks=1 00:10:31.318 --rc geninfo_unexecuted_blocks=1 00:10:31.318 00:10:31.318 ' 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:31.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.318 --rc genhtml_branch_coverage=1 00:10:31.318 --rc genhtml_function_coverage=1 00:10:31.318 --rc genhtml_legend=1 00:10:31.318 --rc geninfo_all_blocks=1 00:10:31.318 --rc geninfo_unexecuted_blocks=1 00:10:31.318 00:10:31.318 ' 00:10:31.318 22:01:04 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:31.318 22:01:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:31.318 22:01:04 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:31.580 22:01:04 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:31.580 22:01:04 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:31.580 22:01:04 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:31.580 22:01:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:31.580 22:01:04 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65808 00:10:31.580 22:01:04 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:31.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.580 22:01:04 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65808 00:10:31.580 22:01:04 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:31.580 22:01:04 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65808 ']' 00:10:31.580 22:01:04 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.580 22:01:04 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.580 22:01:04 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.580 22:01:04 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.580 22:01:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.580 [2024-12-06 22:01:04.282567] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
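The nvme_rpc test exercises the JSON-RPC surface end to end: it attaches the controller at 0000:00:10.0 as bdev Nvme0, then deliberately calls bdev_nvme_apply_firmware with a file that does not exist and expects the -32603 "open file failed." error shown below before detaching. The same sequence against a running spdk_tgt, condensed into a sketch:

#!/usr/bin/env bash
# Happy-path attach, expected-failure firmware update, then detach.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes Nvme0n1
if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
  echo 'apply_firmware failed as expected (missing firmware image)'
fi
$rpc bdev_nvme_detach_controller Nvme0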
00:10:31.580 [2024-12-06 22:01:04.282697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65808 ] 00:10:31.580 [2024-12-06 22:01:04.444457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:31.842 [2024-12-06 22:01:04.579946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.842 [2024-12-06 22:01:04.579970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.431 22:01:05 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.431 22:01:05 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:32.431 22:01:05 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:33.002 Nvme0n1 00:10:33.002 22:01:05 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:33.002 22:01:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:33.002 request: 00:10:33.002 { 00:10:33.002 "bdev_name": "Nvme0n1", 00:10:33.002 "filename": "non_existing_file", 00:10:33.002 "method": "bdev_nvme_apply_firmware", 00:10:33.002 "req_id": 1 00:10:33.002 } 00:10:33.002 Got JSON-RPC error response 00:10:33.002 response: 00:10:33.002 { 00:10:33.002 "code": -32603, 00:10:33.002 "message": "open file failed." 00:10:33.002 } 00:10:33.002 22:01:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:33.002 22:01:05 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:33.002 22:01:05 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:33.259 22:01:05 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:33.259 22:01:05 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65808 00:10:33.259 22:01:05 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65808 ']' 00:10:33.259 22:01:05 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65808 00:10:33.259 22:01:05 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:33.259 22:01:06 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.259 22:01:06 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65808 00:10:33.259 22:01:06 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.259 killing process with pid 65808 00:10:33.259 22:01:06 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.259 22:01:06 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65808' 00:10:33.259 22:01:06 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65808 00:10:33.259 22:01:06 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65808 00:10:34.632 ************************************ 00:10:34.632 END TEST nvme_rpc 00:10:34.632 ************************************ 00:10:34.632 00:10:34.632 real 0m3.478s 00:10:34.632 user 0m6.543s 00:10:34.632 sys 0m0.579s 00:10:34.632 22:01:07 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.632 22:01:07 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.891 22:01:07 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:34.891 22:01:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:34.891 22:01:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.891 22:01:07 -- common/autotest_common.sh@10 -- # set +x 00:10:34.891 ************************************ 00:10:34.891 START TEST nvme_rpc_timeouts 00:10:34.891 ************************************ 00:10:34.891 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:34.891 * Looking for test storage... 00:10:34.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:34.891 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:34.891 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:10:34.891 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:34.891 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.891 22:01:07 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:34.891 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.891 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:34.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.891 --rc genhtml_branch_coverage=1 00:10:34.891 --rc genhtml_function_coverage=1 00:10:34.891 --rc genhtml_legend=1 00:10:34.891 --rc geninfo_all_blocks=1 00:10:34.891 --rc geninfo_unexecuted_blocks=1 00:10:34.891 00:10:34.891 ' 00:10:34.891 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:34.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.891 --rc genhtml_branch_coverage=1 00:10:34.891 --rc genhtml_function_coverage=1 00:10:34.891 --rc genhtml_legend=1 00:10:34.892 --rc geninfo_all_blocks=1 00:10:34.892 --rc geninfo_unexecuted_blocks=1 00:10:34.892 00:10:34.892 ' 00:10:34.892 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:34.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.892 --rc genhtml_branch_coverage=1 00:10:34.892 --rc genhtml_function_coverage=1 00:10:34.892 --rc genhtml_legend=1 00:10:34.892 --rc geninfo_all_blocks=1 00:10:34.892 --rc geninfo_unexecuted_blocks=1 00:10:34.892 00:10:34.892 ' 00:10:34.892 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:34.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.892 --rc genhtml_branch_coverage=1 00:10:34.892 --rc genhtml_function_coverage=1 00:10:34.892 --rc genhtml_legend=1 00:10:34.892 --rc geninfo_all_blocks=1 00:10:34.892 --rc geninfo_unexecuted_blocks=1 00:10:34.892 00:10:34.892 ' 00:10:34.892 22:01:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:34.892 22:01:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65873 00:10:34.892 22:01:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65873 00:10:34.892 22:01:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65910 00:10:34.892 22:01:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
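The cmp_versions trace repeated before each test (the "lt 1.15 2" check above) is a plain field-wise numeric comparison: both version strings are split on '.', '-' and ':' into arrays, then compared element by element, with missing fields treated as zero. A condensed, simplified sketch of the same logic:

# lt A B: succeed when version A sorts strictly before version B.
lt() {
  local -a v1 v2; local i n
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # versions are equal, so not less-than
}
lt 1.15 2 && echo '1.15 < 2'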
00:10:34.892 22:01:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65910 00:10:34.892 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65910 ']' 00:10:34.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.892 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.892 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.892 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.892 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.892 22:01:07 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:34.892 22:01:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:34.892 [2024-12-06 22:01:07.728540] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:10:34.892 [2024-12-06 22:01:07.728674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65910 ] 00:10:35.150 [2024-12-06 22:01:07.890941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:35.150 [2024-12-06 22:01:07.994068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.150 [2024-12-06 22:01:07.994216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.085 22:01:08 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.085 22:01:08 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:36.085 Checking default timeout settings: 00:10:36.085 22:01:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:36.085 22:01:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:36.085 Making settings changes with rpc: 00:10:36.085 22:01:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:36.085 22:01:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:36.343 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:36.343 Check default vs. 
modified settings: 00:10:36.343 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65873 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65873 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:36.600 Setting action_on_timeout is changed as expected. 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65873 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:36.600 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65873 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:36.857 Setting timeout_us is changed as expected. 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65873 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65873 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:36.857 Setting timeout_admin_us is changed as expected. 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65873 /tmp/settings_modified_65873 00:10:36.857 22:01:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65910 00:10:36.857 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65910 ']' 00:10:36.857 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65910 00:10:36.857 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:36.857 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.857 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65910 00:10:36.857 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.857 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.858 killing process with pid 65910 00:10:36.858 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65910' 00:10:36.858 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65910 00:10:36.858 22:01:09 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65910 00:10:38.227 RPC TIMEOUT SETTING TEST PASSED. 00:10:38.227 22:01:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
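The verification pattern traced above is config-diff based: save_config dumps the target state before and after bdev_nvme_set_options, then each setting is grepped out of both dumps, stripped to alphanumerics, and compared. A condensed sketch of the whole check, mirroring the traced commands:

#!/usr/bin/env bash
# Confirm the timeout options took effect by diffing saved configs.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default            # before
$rpc bdev_nvme_set_options --timeout-us=12000000 \
  --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified           # after
for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  [[ $before == "$after" ]] && { echo "$setting did not change"; exit 1; }
  echo "Setting $setting is changed as expected."
done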
00:10:38.227 00:10:38.227 real 0m3.548s 00:10:38.227 user 0m6.804s 00:10:38.227 sys 0m0.512s 00:10:38.227 22:01:11 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.227 22:01:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:38.227 ************************************ 00:10:38.227 END TEST nvme_rpc_timeouts 00:10:38.227 ************************************ 00:10:38.227 22:01:11 -- spdk/autotest.sh@239 -- # uname -s 00:10:38.227 22:01:11 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:38.227 22:01:11 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:38.227 22:01:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:38.228 22:01:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.228 22:01:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.485 ************************************ 00:10:38.485 START TEST sw_hotplug 00:10:38.485 ************************************ 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:38.485 * Looking for test storage... 00:10:38.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.485 22:01:11 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:38.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.485 --rc genhtml_branch_coverage=1 00:10:38.485 --rc genhtml_function_coverage=1 00:10:38.485 --rc genhtml_legend=1 00:10:38.485 --rc geninfo_all_blocks=1 00:10:38.485 --rc geninfo_unexecuted_blocks=1 00:10:38.485 00:10:38.485 ' 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:38.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.485 --rc genhtml_branch_coverage=1 00:10:38.485 --rc genhtml_function_coverage=1 00:10:38.485 --rc genhtml_legend=1 00:10:38.485 --rc geninfo_all_blocks=1 00:10:38.485 --rc geninfo_unexecuted_blocks=1 00:10:38.485 00:10:38.485 ' 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:38.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.485 --rc genhtml_branch_coverage=1 00:10:38.485 --rc genhtml_function_coverage=1 00:10:38.485 --rc genhtml_legend=1 00:10:38.485 --rc geninfo_all_blocks=1 00:10:38.485 --rc geninfo_unexecuted_blocks=1 00:10:38.485 00:10:38.485 ' 00:10:38.485 22:01:11 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:38.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.485 --rc genhtml_branch_coverage=1 00:10:38.485 --rc genhtml_function_coverage=1 00:10:38.485 --rc genhtml_legend=1 00:10:38.485 --rc geninfo_all_blocks=1 00:10:38.485 --rc geninfo_unexecuted_blocks=1 00:10:38.485 00:10:38.485 ' 00:10:38.485 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:38.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:39.000 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.000 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.000 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.000 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:39.000 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:39.000 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:39.000 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
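nvme_in_userspace, expanded in the trace that follows, does not look at /dev at all: it walks PCI space for devices with class code 01 (mass storage), subclass 08 (non-volatile memory) and programming interface 02 (NVM Express), then keeps those with an nvme driver entry under /sys. The core enumeration reduces to one lspci pipeline, reproduced here as a sketch:

#!/usr/bin/env bash
# BDFs of all NVMe controllers, by PCI class code 0108 / progif 02.
lspci -mm -n -D | grep -i -- -p02 \
  | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# prints e.g. 0000:00:10.0 ... 0000:00:13.0, one per line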
00:10:39.000 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:39.000 22:01:11 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.000 22:01:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:39.001 22:01:11 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:39.001 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:39.001 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:39.001 22:01:11 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:39.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:39.514 Waiting for block devices as requested 00:10:39.514 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:39.514 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:39.514 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:39.771 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:45.039 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:45.039 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:45.039 22:01:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:45.039 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:45.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:45.039 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:45.297 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:45.556 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.556 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.556 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:45.556 22:01:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.814 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:45.814 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:45.814 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66767 00:10:45.814 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:45.814 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:45.814 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:45.814 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:45.814 22:01:18 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:45.814 22:01:18 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:45.814 22:01:18 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:45.814 22:01:18 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:45.814 22:01:18 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:45.814 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:45.815 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:45.815 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:45.815 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:45.815 22:01:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:45.815 Initializing NVMe Controllers 00:10:45.815 Attaching to 0000:00:10.0 00:10:45.815 Attaching to 0000:00:11.0 00:10:45.815 Attached to 0000:00:10.0 00:10:46.073 Attached to 0000:00:11.0 00:10:46.073 Initialization complete. Starting I/O... 
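Each of the three hotplug events that follow is driven through sysfs rather than physical removal: the test writes into the device's PCI remove node to surprise-remove the controller mid-I/O (the driver then logs the "in failed state" errors and aborts outstanding commands), and afterwards rescans the bus and rebinds the device so the next cycle can run. A minimal sketch of one such cycle; the exact sysfs node paths are assumptions, since the traced script only shows the values it echoes:

#!/usr/bin/env bash
# One surprise-remove / re-attach cycle for an NVMe controller (run as root).
bdf=0000:00:10.0                                   # target BDF from the log above
echo 1 > "/sys/bus/pci/devices/$bdf/remove"        # surprise-remove under I/O
sleep 1
echo 1 > /sys/bus/pci/rescan                       # rediscover the device
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe           # rebind to the userspace driver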
00:10:46.073 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:46.073 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:46.073 00:10:47.008 QEMU NVMe Ctrl (12340 ): 2481 I/Os completed (+2481) 00:10:47.008 QEMU NVMe Ctrl (12341 ): 2770 I/Os completed (+2770) 00:10:47.008 00:10:47.944 QEMU NVMe Ctrl (12340 ): 6339 I/Os completed (+3858) 00:10:47.944 QEMU NVMe Ctrl (12341 ): 6859 I/Os completed (+4089) 00:10:47.944 00:10:48.880 QEMU NVMe Ctrl (12340 ): 9756 I/Os completed (+3417) 00:10:48.880 QEMU NVMe Ctrl (12341 ): 10516 I/Os completed (+3657) 00:10:48.880 00:10:50.254 QEMU NVMe Ctrl (12340 ): 13071 I/Os completed (+3315) 00:10:50.255 QEMU NVMe Ctrl (12341 ): 14272 I/Os completed (+3756) 00:10:50.255 00:10:50.820 QEMU NVMe Ctrl (12340 ): 16927 I/Os completed (+3856) 00:10:50.820 QEMU NVMe Ctrl (12341 ): 17769 I/Os completed (+3497) 00:10:50.820 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.754 [2024-12-06 22:01:24.492782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:51.754 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:51.754 [2024-12-06 22:01:24.493933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.494072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.494108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.494167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:51.754 [2024-12-06 22:01:24.495944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.496052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.496080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.496143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.754 [2024-12-06 22:01:24.518626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:51.754 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:51.754 [2024-12-06 22:01:24.519578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.519695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.519753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.519778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:51.754 [2024-12-06 22:01:24.521334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.521369] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.521382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 [2024-12-06 22:01:24.521395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.754 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:51.754 EAL: Scan for (pci) bus failed. 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.754 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.012 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.012 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.012 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.012 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.012 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:52.012 Attaching to 0000:00:10.0 00:10:52.012 Attached to 0000:00:10.0 00:10:52.012 QEMU NVMe Ctrl (12340 ): 16 I/Os completed (+16) 00:10:52.012 00:10:52.012 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.012 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.012 22:01:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:52.012 Attaching to 0000:00:11.0 00:10:52.012 Attached to 0000:00:11.0 00:10:52.946 QEMU NVMe Ctrl (12340 ): 3640 I/Os completed (+3624) 00:10:52.946 QEMU NVMe Ctrl (12341 ): 3274 I/Os completed (+3274) 00:10:52.946 00:10:53.881 QEMU NVMe Ctrl (12340 ): 7068 I/Os completed (+3428) 00:10:53.881 QEMU NVMe Ctrl (12341 ): 6705 I/Os completed (+3431) 00:10:53.881 00:10:55.254 QEMU NVMe Ctrl (12340 ): 10734 I/Os completed (+3666) 00:10:55.254 QEMU NVMe Ctrl (12341 ): 10423 I/Os completed (+3718) 00:10:55.254 00:10:55.821 QEMU NVMe Ctrl (12340 ): 14387 I/Os completed (+3653) 00:10:55.822 QEMU NVMe Ctrl (12341 ): 14340 I/Os completed (+3917) 00:10:55.822 00:10:57.200 QEMU NVMe Ctrl (12340 ): 17537 I/Os completed (+3150) 00:10:57.200 QEMU NVMe Ctrl (12341 ): 17681 I/Os completed (+3341) 00:10:57.200 00:10:58.133 QEMU NVMe Ctrl (12340 ): 20786 I/Os completed (+3249) 00:10:58.133 QEMU NVMe Ctrl (12341 ): 20978 I/Os completed (+3297) 00:10:58.133 00:10:59.066 QEMU NVMe Ctrl (12340 ): 24426 I/Os completed (+3640) 
00:10:59.066 QEMU NVMe Ctrl (12341 ): 24517 I/Os completed (+3539) 00:10:59.066 00:11:00.004 QEMU NVMe Ctrl (12340 ): 27881 I/Os completed (+3455) 00:11:00.004 QEMU NVMe Ctrl (12341 ): 27983 I/Os completed (+3466) 00:11:00.004 00:11:00.940 QEMU NVMe Ctrl (12340 ): 31497 I/Os completed (+3616) 00:11:00.940 QEMU NVMe Ctrl (12341 ): 31769 I/Os completed (+3786) 00:11:00.940 00:11:01.874 QEMU NVMe Ctrl (12340 ): 35070 I/Os completed (+3573) 00:11:01.874 QEMU NVMe Ctrl (12341 ): 35359 I/Os completed (+3590) 00:11:01.874 00:11:03.246 QEMU NVMe Ctrl (12340 ): 38630 I/Os completed (+3560) 00:11:03.247 QEMU NVMe Ctrl (12341 ): 38740 I/Os completed (+3381) 00:11:03.247 00:11:04.182 QEMU NVMe Ctrl (12340 ): 42265 I/Os completed (+3635) 00:11:04.182 QEMU NVMe Ctrl (12341 ): 42394 I/Os completed (+3654) 00:11:04.182 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.182 [2024-12-06 22:01:36.759128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:04.182 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:04.182 [2024-12-06 22:01:36.760244] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.760406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.760441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.760820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:04.182 [2024-12-06 22:01:36.767671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.767795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.767859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.767901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:04.182 [2024-12-06 22:01:36.790294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:04.182 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.182 [2024-12-06 22:01:36.791338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.791755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.791790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.791811] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:04.182 [2024-12-06 22:01:36.793545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.793844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.793867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 [2024-12-06 22:01:36.793882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.182 22:01:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:04.182 Attaching to 0000:00:10.0 00:11:04.182 Attached to 0000:00:10.0 00:11:04.182 22:01:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.182 22:01:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.182 22:01:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.182 Attaching to 0000:00:11.0 00:11:04.182 Attached to 0000:00:11.0 00:11:05.118 QEMU NVMe Ctrl (12340 ): 2574 I/Os completed (+2574) 00:11:05.118 QEMU NVMe Ctrl (12341 ): 2365 I/Os completed (+2365) 00:11:05.118 00:11:06.055 QEMU NVMe Ctrl (12340 ): 6002 I/Os completed (+3428) 00:11:06.055 QEMU NVMe Ctrl (12341 ): 5955 I/Os completed (+3590) 00:11:06.055 00:11:06.990 QEMU NVMe Ctrl (12340 ): 9301 I/Os completed (+3299) 00:11:06.990 QEMU NVMe Ctrl (12341 ): 10105 I/Os completed (+4150) 00:11:06.990 00:11:07.926 QEMU NVMe Ctrl (12340 ): 12573 I/Os completed (+3272) 00:11:07.926 QEMU NVMe Ctrl (12341 ): 13442 I/Os completed (+3337) 00:11:07.926 00:11:08.862 QEMU NVMe Ctrl (12340 ): 15851 I/Os completed (+3278) 00:11:08.862 QEMU NVMe Ctrl (12341 ): 16719 I/Os completed (+3277) 00:11:08.862 00:11:10.251 QEMU NVMe Ctrl (12340 ): 18982 I/Os completed (+3131) 00:11:10.251 QEMU NVMe Ctrl (12341 ): 19836 I/Os completed (+3117) 00:11:10.251 00:11:10.839 QEMU NVMe Ctrl (12340 ): 22259 I/Os completed (+3277) 00:11:10.839 QEMU NVMe Ctrl (12341 ): 23078 I/Os completed (+3242) 00:11:10.839 00:11:12.215 QEMU NVMe Ctrl (12340 ): 25571 I/Os completed (+3312) 00:11:12.215 QEMU NVMe Ctrl (12341 ): 26372 I/Os completed (+3294) 00:11:12.215 00:11:13.147 QEMU NVMe Ctrl (12340 ): 28738 I/Os completed (+3167) 00:11:13.147 
QEMU NVMe Ctrl (12341 ): 29736 I/Os completed (+3364) 00:11:13.147 00:11:14.079 QEMU NVMe Ctrl (12340 ): 31969 I/Os completed (+3231) 00:11:14.079 QEMU NVMe Ctrl (12341 ): 32889 I/Os completed (+3153) 00:11:14.079 00:11:15.009 QEMU NVMe Ctrl (12340 ): 35119 I/Os completed (+3150) 00:11:15.009 QEMU NVMe Ctrl (12341 ): 36508 I/Os completed (+3619) 00:11:15.009 00:11:15.939 QEMU NVMe Ctrl (12340 ): 38099 I/Os completed (+2980) 00:11:15.939 QEMU NVMe Ctrl (12341 ): 39536 I/Os completed (+3028) 00:11:15.939 00:11:16.198 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:16.198 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.198 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.198 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.198 [2024-12-06 22:01:49.038919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:16.198 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:16.198 [2024-12-06 22:01:49.040387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.040528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.040570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.040637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:16.198 [2024-12-06 22:01:49.042664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.042734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.042764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.042837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.198 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.198 [2024-12-06 22:01:49.061369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
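(The removal event for 0000:00:11.0 above continues below.) Each hotplug event in this trace begins with the per-controller "echo 1" at @39-@40. xtrace does not show redirection targets, so the sysfs path in this sketch is an assumption based on the standard Linux PCI surprise-removal attribute:

    # Hedged sketch of the @39-@40 removal step; the redirection target is
    # not visible in the trace and /sys/bus/pci/devices/$dev/remove is the
    # assumed standard kernel knob.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done

SPDK then marks each controller failed and aborts its outstanding trackers, which is the burst of *ERROR* lines that follows each removal.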
00:11:16.198 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:16.198 [2024-12-06 22:01:49.062600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.062644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.062663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.062680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:16.198 [2024-12-06 22:01:49.064532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.064659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.064684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.198 [2024-12-06 22:01:49.064698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:16.455 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:16.455 EAL: Scan for (pci) bus failed. 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:16.455 Attaching to 0000:00:10.0 00:11:16.455 Attached to 0000:00:10.0 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.455 22:01:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:16.455 Attaching to 0000:00:11.0 00:11:16.455 Attached to 0000:00:11.0 00:11:16.455 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:16.455 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:16.455 [2024-12-06 22:01:49.303836] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:28.714 22:02:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:28.714 22:02:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.714 22:02:01 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.81 00:11:28.714 22:02:01 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.81 00:11:28.714 22:02:01 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:28.714 22:02:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:11:28.714 22:02:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:11:28.714 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 22:02:01 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:35.302 22:02:07 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66767 00:11:35.302 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66767) - No such process 00:11:35.302 22:02:07 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66767 00:11:35.302 22:02:07 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:35.302 22:02:07 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:35.302 22:02:07 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:35.302 22:02:07 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67315 00:11:35.302 22:02:07 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:35.302 22:02:07 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:35.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.302 22:02:07 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67315 00:11:35.302 22:02:07 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67315 ']' 00:11:35.302 22:02:07 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.302 22:02:07 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.302 22:02:07 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.302 22:02:07 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.302 22:02:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.302 [2024-12-06 22:02:07.388616] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
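While the freshly launched spdk_tgt initializes (its EAL/app banner continues below), the startup traced at @107-@115 condenses to the following sketch. This is reconstructed from the xtrace, not the verbatim script, and $rootdir is assumed shorthand for the repo path:

    # Sketch of the tgt_run_hotplug startup sequence shown in the trace.
    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    # On interruption, kill the target and force a PCI rescan so no device
    # is left detached (this trap string appears verbatim in the trace).
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"    # blocks until /var/tmp/spdk.sock answers
    rpc_cmd bdev_nvme_set_hotplug -e # enable the bdev-layer hotplug poller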
00:11:35.302 [2024-12-06 22:02:07.388730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67315 ] 00:11:35.302 [2024-12-06 22:02:07.545647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.302 [2024-12-06 22:02:07.654787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:35.560 22:02:08 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.560 22:02:08 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:35.560 22:02:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:35.560 22:02:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:35.560 22:02:08 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:35.560 22:02:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:35.560 22:02:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:35.560 22:02:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:35.560 22:02:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:35.560 22:02:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.116 22:02:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.116 22:02:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.116 22:02:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:42.116 [2024-12-06 22:02:14.398238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:42.116 [2024-12-06 22:02:14.399481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.116 [2024-12-06 22:02:14.399519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.116 [2024-12-06 22:02:14.399532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.116 [2024-12-06 22:02:14.399552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.116 [2024-12-06 22:02:14.399560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.116 [2024-12-06 22:02:14.399568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.116 [2024-12-06 22:02:14.399575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.116 [2024-12-06 22:02:14.399584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.116 [2024-12-06 22:02:14.399590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.116 [2024-12-06 22:02:14.399600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.116 [2024-12-06 22:02:14.399607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.116 [2024-12-06 22:02:14.399614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.116 22:02:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.116 22:02:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.116 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.116 [2024-12-06 22:02:14.898251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
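(The removal event for 0000:00:11.0 above continues below.) With use_bdev=true, "gone" is decided by asking the target which PCI addresses still back an NVMe bdev. The bdev_bdfs helper traced at @12-@13 reduces to this pipeline; the process substitution accounts for the /dev/fd/63 seen in the trace, and this is a reconstruction rather than the verbatim source:

    # List the unique PCI addresses of all NVMe-backed bdevs via JSON-RPC.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }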
00:11:42.116 [2024-12-06 22:02:14.900198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.116 [2024-12-06 22:02:14.900345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.116 [2024-12-06 22:02:14.900369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.116 [2024-12-06 22:02:14.900392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.116 [2024-12-06 22:02:14.900407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.116 [2024-12-06 22:02:14.900419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.116 [2024-12-06 22:02:14.900434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.116 [2024-12-06 22:02:14.900445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.116 [2024-12-06 22:02:14.900457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.116 [2024-12-06 22:02:14.900469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.117 [2024-12-06 22:02:14.900482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.117 [2024-12-06 22:02:14.900493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.117 22:02:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.117 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:42.117 22:02:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.682 22:02:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.682 22:02:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.682 22:02:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.682 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:42.940 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:42.940 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.940 22:02:15 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.940 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.940 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:42.940 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:42.940 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.940 22:02:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.183 22:02:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.183 22:02:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.183 22:02:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.183 22:02:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.183 22:02:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.183 [2024-12-06 22:02:27.798434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:55.183 [2024-12-06 22:02:27.799971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.183 [2024-12-06 22:02:27.800014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.183 [2024-12-06 22:02:27.800027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.183 [2024-12-06 22:02:27.800053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.183 [2024-12-06 22:02:27.800067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.183 [2024-12-06 22:02:27.800076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.183 [2024-12-06 22:02:27.800084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.183 [2024-12-06 22:02:27.800093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.183 [2024-12-06 22:02:27.800099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.183 [2024-12-06 22:02:27.800108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.183 [2024-12-06 22:02:27.800114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.183 [2024-12-06 22:02:27.800122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.183 22:02:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:55.183 22:02:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:55.441 [2024-12-06 22:02:28.298420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
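(The removal event for 0000:00:11.0 above continues below.) The "Still waiting for ... to be gone" lines seen throughout this run come from the @50-@51 polling loop, roughly:

    # Poll every 0.5s until no removed BDF is still reported by the target.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done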
00:11:55.441 [2024-12-06 22:02:28.299706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.441 [2024-12-06 22:02:28.299743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.441 [2024-12-06 22:02:28.299758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.441 [2024-12-06 22:02:28.299776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.441 [2024-12-06 22:02:28.299786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.441 [2024-12-06 22:02:28.299794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.441 [2024-12-06 22:02:28.299812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.441 [2024-12-06 22:02:28.299819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.441 [2024-12-06 22:02:28.299827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.441 [2024-12-06 22:02:28.299835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.441 [2024-12-06 22:02:28.299843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.441 [2024-12-06 22:02:28.299850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.699 22:02:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.699 22:02:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.699 22:02:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.699 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:55.700 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:55.700 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.700 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.700 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.700 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:55.957 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:55.957 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.957 22:02:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:08.151 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:08.151 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:08.151 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:08.151 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.151 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.151 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.151 22:02:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.151 22:02:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.151 22:02:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.151 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.152 22:02:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.152 22:02:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.152 22:02:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:08.152 22:02:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:08.152 [2024-12-06 22:02:40.699168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:08.152 [2024-12-06 22:02:40.700578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.152 [2024-12-06 22:02:40.700689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.152 [2024-12-06 22:02:40.700749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.152 [2024-12-06 22:02:40.700808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.152 [2024-12-06 22:02:40.700829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.152 [2024-12-06 22:02:40.700895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.152 [2024-12-06 22:02:40.700922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.152 [2024-12-06 22:02:40.700941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.152 [2024-12-06 22:02:40.700964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.152 [2024-12-06 22:02:40.701045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.152 [2024-12-06 22:02:40.701066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.152 [2024-12-06 22:02:40.701092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.410 [2024-12-06 22:02:41.099159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
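(Removal of 0000:00:11.0 above continues below.) Once both controllers are gone, each pass re-attaches them; the echoes traced at @56-@62 are consistent with the standard sysfs rescan-and-rebind dance. xtrace hides the redirection targets, so every path in this sketch is an assumption; only /sys/bus/pci/rescan is confirmed by the trap string earlier in the trace:

    echo 1 > /sys/bus/pci/rescan        # @56: rediscover the removed functions
    for dev in "${nvmes[@]}"; do        # @58
        # Steer the function back to the userspace driver, then trigger a
        # probe; driver_override and drivers_probe are the usual kernel
        # knobs (assumed targets here).
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
    done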
00:12:08.410 [2024-12-06 22:02:41.100467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.410 [2024-12-06 22:02:41.100571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.410 [2024-12-06 22:02:41.100636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.410 [2024-12-06 22:02:41.100699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.410 [2024-12-06 22:02:41.100721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.410 [2024-12-06 22:02:41.100772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.410 [2024-12-06 22:02:41.100801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.410 [2024-12-06 22:02:41.100818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.410 [2024-12-06 22:02:41.100882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.410 [2024-12-06 22:02:41.100907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.410 [2024-12-06 22:02:41.100925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.410 [2024-12-06 22:02:41.100975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.410 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:08.410 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.410 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.410 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.410 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.410 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.410 22:02:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.410 22:02:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.410 22:02:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.410 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:08.410 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.669 22:02:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.17 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.17 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:12:20.937 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:20.937 22:02:53 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:20.937 22:02:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:20.937 22:02:53 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.514 22:02:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.514 22:02:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.514 22:02:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:27.514 22:02:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:27.514 [2024-12-06 22:02:59.592780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:27.514 [2024-12-06 22:02:59.594109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.514 [2024-12-06 22:02:59.594146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.514 [2024-12-06 22:02:59.594158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.514 [2024-12-06 22:02:59.594191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.514 [2024-12-06 22:02:59.594200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.514 [2024-12-06 22:02:59.594210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.514 [2024-12-06 22:02:59.594219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.514 [2024-12-06 22:02:59.594229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.514 [2024-12-06 22:02:59.594236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.514 [2024-12-06 22:02:59.594244] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.514 [2024-12-06 22:02:59.594251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.514 [2024-12-06 22:02:59.594262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.514 [2024-12-06 22:02:59.992770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
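(The removal event above continues below.) The "remove_attach_helper took 45.17s" summary printed before this pass comes from the timing wrapper traced at @709-@722 and @19-@22. A simplified sketch of that pattern, reconstructed and not verbatim (the real wrapper also propagates the command's exit status):

    # Capture bash's TIMEFORMAT=%2R elapsed-seconds output around the helper
    # while letting the helper's own output pass through on stderr.
    TIMEFORMAT=%2R
    helper_time=$({ time remove_attach_helper 3 6 true >&2; } 2>&1)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"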
00:12:27.514 [2024-12-06 22:02:59.994021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.514 [2024-12-06 22:02:59.994054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.514 [2024-12-06 22:02:59.994067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.514 [2024-12-06 22:02:59.994078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.514 [2024-12-06 22:02:59.994087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.514 [2024-12-06 22:02:59.994095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.514 [2024-12-06 22:02:59.994105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.514 [2024-12-06 22:02:59.994112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.514 [2024-12-06 22:02:59.994122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.514 [2024-12-06 22:02:59.994129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.514 [2024-12-06 22:02:59.994137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.514 [2024-12-06 22:02:59.994144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.514 22:03:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.514 22:03:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.514 22:03:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.514 22:03:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:39.704 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.705 22:03:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.705 22:03:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.705 22:03:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.705 [2024-12-06 22:03:12.392988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:39.705 [2024-12-06 22:03:12.394279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.705 [2024-12-06 22:03:12.394442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.705 [2024-12-06 22:03:12.394458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.705 [2024-12-06 22:03:12.394479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.705 [2024-12-06 22:03:12.394487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.705 [2024-12-06 22:03:12.394496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.705 [2024-12-06 22:03:12.394503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.705 [2024-12-06 22:03:12.394512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.705 [2024-12-06 22:03:12.394518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.705 [2024-12-06 22:03:12.394527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.705 [2024-12-06 22:03:12.394534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.705 [2024-12-06 22:03:12.394543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.705 22:03:12 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.705 22:03:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.705 22:03:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.705 22:03:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:39.705 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:40.272 [2024-12-06 22:03:12.892987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:40.272 [2024-12-06 22:03:12.894273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.272 [2024-12-06 22:03:12.894303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.272 [2024-12-06 22:03:12.894316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.272 [2024-12-06 22:03:12.894329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.272 [2024-12-06 22:03:12.894340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.272 [2024-12-06 22:03:12.894348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.272 [2024-12-06 22:03:12.894357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.272 [2024-12-06 22:03:12.894364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.272 [2024-12-06 22:03:12.894373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.272 [2024-12-06 22:03:12.894381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.272 [2024-12-06 22:03:12.894389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.272 [2024-12-06 22:03:12.894396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.272 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:40.272 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:40.272 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:40.272 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.272 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.272 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:40.272 22:03:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.272 22:03:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.272 22:03:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.272 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:40.272 22:03:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:40.272 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:40.272 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:40.272 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:40.272 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:40.272 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:40.272 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:40.272 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:40.272 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:40.530 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:40.530 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:40.530 22:03:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.755 22:03:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.755 22:03:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.755 22:03:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:52.755 [2024-12-06 22:03:25.293203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
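(Removal of 0000:00:10.0 above continues below.) Between hotplug events the helper verifies that the re-attach took: the @68-@71 lines just traced are, in sketch form, a comparison of the rediscovered BDF list against the expected pair:

    # After the post-rescan settle (sleep 12), confirm both controllers are
    # back as bdevs and that the sorted BDF list matches what was removed.
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]   # expects: 0000:00:10.0 0000:00:11.0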
00:12:52.755 [2024-12-06 22:03:25.294334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.755 [2024-12-06 22:03:25.294463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.755 [2024-12-06 22:03:25.294523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.755 [2024-12-06 22:03:25.294838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.755 [2024-12-06 22:03:25.294872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.755 [2024-12-06 22:03:25.294904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.755 [2024-12-06 22:03:25.294930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.755 [2024-12-06 22:03:25.294951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.755 [2024-12-06 22:03:25.294975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.755 [2024-12-06 22:03:25.295047] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.755 [2024-12-06 22:03:25.295068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.755 [2024-12-06 22:03:25.295093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.755 22:03:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.755 22:03:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.755 22:03:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:52.755 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:53.013 [2024-12-06 22:03:25.793191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:53.013 [2024-12-06 22:03:25.794133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.013 [2024-12-06 22:03:25.794165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.013 [2024-12-06 22:03:25.794189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.013 [2024-12-06 22:03:25.794202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.014 [2024-12-06 22:03:25.794211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.014 [2024-12-06 22:03:25.794219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.014 [2024-12-06 22:03:25.794229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.014 [2024-12-06 22:03:25.794236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.014 [2024-12-06 22:03:25.794244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.014 [2024-12-06 22:03:25.794252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.014 [2024-12-06 22:03:25.794262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.014 [2024-12-06 22:03:25.794269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.014 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:53.014 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:53.014 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:53.014 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.014 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.014 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.014 22:03:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.014 22:03:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.014 22:03:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.271 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:53.271 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:53.271 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:53.271 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:53.271 22:03:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:53.271 22:03:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:53.271 22:03:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:53.271 22:03:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:53.271 22:03:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:53.271 22:03:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:53.529 22:03:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:53.529 22:03:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:53.529 22:03:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.71 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.71 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.71 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.71 2 00:13:05.729 remove_attach_helper took 44.71s to complete (handling 2 nvme drive(s)) 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:05.729 22:03:38 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67315 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67315 ']' 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67315 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67315 00:13:05.729 killing process with pid 67315 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67315' 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67315 00:13:05.729 22:03:38 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67315 00:13:07.105 22:03:39 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:07.105 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:07.672 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:07.672 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:07.672 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:07.672 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:07.672 ************************************ 00:13:07.672 END TEST sw_hotplug 00:13:07.672 00:13:07.672 real 2m29.420s 00:13:07.672 user 1m52.217s 
00:13:07.672 sys 0m15.852s 00:13:07.672 22:03:40 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.672 22:03:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.672 ************************************ 00:13:07.933 22:03:40 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:07.933 22:03:40 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:07.933 22:03:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.933 22:03:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.933 22:03:40 -- common/autotest_common.sh@10 -- # set +x 00:13:07.933 ************************************ 00:13:07.933 START TEST nvme_xnvme 00:13:07.933 ************************************ 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:07.933 * Looking for test storage... 00:13:07.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.933 22:03:40 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:07.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.933 --rc genhtml_branch_coverage=1 00:13:07.933 --rc genhtml_function_coverage=1 00:13:07.933 --rc genhtml_legend=1 00:13:07.933 --rc geninfo_all_blocks=1 00:13:07.933 --rc geninfo_unexecuted_blocks=1 00:13:07.933 00:13:07.933 ' 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:07.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.933 --rc genhtml_branch_coverage=1 00:13:07.933 --rc genhtml_function_coverage=1 00:13:07.933 --rc genhtml_legend=1 00:13:07.933 --rc geninfo_all_blocks=1 00:13:07.933 --rc geninfo_unexecuted_blocks=1 00:13:07.933 00:13:07.933 ' 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:07.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.933 --rc genhtml_branch_coverage=1 00:13:07.933 --rc genhtml_function_coverage=1 00:13:07.933 --rc genhtml_legend=1 00:13:07.933 --rc geninfo_all_blocks=1 00:13:07.933 --rc geninfo_unexecuted_blocks=1 00:13:07.933 00:13:07.933 ' 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:07.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.933 --rc genhtml_branch_coverage=1 00:13:07.933 --rc genhtml_function_coverage=1 00:13:07.933 --rc genhtml_legend=1 00:13:07.933 --rc geninfo_all_blocks=1 00:13:07.933 --rc geninfo_unexecuted_blocks=1 00:13:07.933 00:13:07.933 ' 00:13:07.933 22:03:40 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:07.933 22:03:40 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:07.933 22:03:40 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:07.933 22:03:40 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:07.933 22:03:40 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:07.934 22:03:40 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:07.934 22:03:40 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:07.934 22:03:40 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:07.934 22:03:40 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:07.934 #define SPDK_CONFIG_H 00:13:07.934 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:07.934 #define SPDK_CONFIG_APPS 1 00:13:07.934 #define SPDK_CONFIG_ARCH native 00:13:07.934 #define SPDK_CONFIG_ASAN 1 00:13:07.934 #undef SPDK_CONFIG_AVAHI 00:13:07.934 #undef SPDK_CONFIG_CET 00:13:07.934 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:07.934 #define SPDK_CONFIG_COVERAGE 1 00:13:07.934 #define SPDK_CONFIG_CROSS_PREFIX 00:13:07.934 #undef SPDK_CONFIG_CRYPTO 00:13:07.934 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:07.934 #undef SPDK_CONFIG_CUSTOMOCF 00:13:07.934 #undef SPDK_CONFIG_DAOS 00:13:07.934 #define SPDK_CONFIG_DAOS_DIR 00:13:07.934 #define SPDK_CONFIG_DEBUG 1 00:13:07.934 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:07.934 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:07.934 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:07.934 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:07.934 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:07.934 #undef SPDK_CONFIG_DPDK_UADK 00:13:07.934 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:07.934 #define SPDK_CONFIG_EXAMPLES 1 00:13:07.934 #undef SPDK_CONFIG_FC 00:13:07.934 #define SPDK_CONFIG_FC_PATH 00:13:07.934 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:07.934 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:07.934 #define SPDK_CONFIG_FSDEV 1 00:13:07.934 #undef SPDK_CONFIG_FUSE 00:13:07.934 #undef SPDK_CONFIG_FUZZER 00:13:07.934 #define SPDK_CONFIG_FUZZER_LIB 00:13:07.934 #undef SPDK_CONFIG_GOLANG 00:13:07.934 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:07.934 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:07.934 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:07.934 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:07.934 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:07.934 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:07.934 #undef SPDK_CONFIG_HAVE_LZ4 00:13:07.934 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:07.934 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:07.934 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:07.934 #define SPDK_CONFIG_IDXD 1 00:13:07.934 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:07.934 #undef SPDK_CONFIG_IPSEC_MB 00:13:07.934 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:07.934 #define SPDK_CONFIG_ISAL 1 00:13:07.934 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:07.934 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:07.934 #define SPDK_CONFIG_LIBDIR 00:13:07.934 #undef SPDK_CONFIG_LTO 00:13:07.934 #define SPDK_CONFIG_MAX_LCORES 128 00:13:07.934 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:07.934 #define SPDK_CONFIG_NVME_CUSE 1 00:13:07.934 #undef SPDK_CONFIG_OCF 00:13:07.934 #define SPDK_CONFIG_OCF_PATH 00:13:07.934 #define SPDK_CONFIG_OPENSSL_PATH 00:13:07.934 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:07.935 #define SPDK_CONFIG_PGO_DIR 00:13:07.935 #undef SPDK_CONFIG_PGO_USE 00:13:07.935 #define SPDK_CONFIG_PREFIX /usr/local 00:13:07.935 #undef SPDK_CONFIG_RAID5F 00:13:07.935 #undef SPDK_CONFIG_RBD 00:13:07.935 #define SPDK_CONFIG_RDMA 1 00:13:07.935 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:07.935 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:07.935 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:07.935 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:07.935 #define SPDK_CONFIG_SHARED 1 00:13:07.935 #undef SPDK_CONFIG_SMA 00:13:07.935 #define SPDK_CONFIG_TESTS 1 00:13:07.935 #undef SPDK_CONFIG_TSAN 00:13:07.935 #define SPDK_CONFIG_UBLK 1 00:13:07.935 #define SPDK_CONFIG_UBSAN 1 00:13:07.935 #undef SPDK_CONFIG_UNIT_TESTS 00:13:07.935 #undef SPDK_CONFIG_URING 00:13:07.935 #define SPDK_CONFIG_URING_PATH 00:13:07.935 #undef SPDK_CONFIG_URING_ZNS 00:13:07.935 #undef SPDK_CONFIG_USDT 00:13:07.935 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:07.935 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:07.935 #undef SPDK_CONFIG_VFIO_USER 00:13:07.935 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:07.935 #define SPDK_CONFIG_VHOST 1 00:13:07.935 #define SPDK_CONFIG_VIRTIO 1 00:13:07.935 #undef SPDK_CONFIG_VTUNE 00:13:07.935 #define SPDK_CONFIG_VTUNE_DIR 00:13:07.935 #define SPDK_CONFIG_WERROR 1 00:13:07.935 #define SPDK_CONFIG_WPDK_DIR 00:13:07.935 #define SPDK_CONFIG_XNVME 1 00:13:07.935 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:07.935 22:03:40 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:07.935 22:03:40 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.935 22:03:40 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.935 22:03:40 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.935 22:03:40 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.935 22:03:40 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.935 22:03:40 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.935 22:03:40 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.935 22:03:40 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.935 22:03:40 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:07.935 22:03:40 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:07.935 
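# editor's note: the dirname/readlink -f calls traced above are the usual bash
# idiom for a sourced script locating itself and then the repository root; a
# minimal stand-alone equivalent (variable names are illustrative):
this_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # directory of this file
repo_root=$(readlink -f "$this_dir/../../..")              # walk up to the repo root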
22:03:40 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:07.935 22:03:40 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:07.935 22:03:40 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:08.197 22:03:40 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:08.197 22:03:40 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:08.197 22:03:40 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:08.197 22:03:40 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:08.197 22:03:40 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:08.197 22:03:40 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:08.197 22:03:40 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:08.198 22:03:40 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:08.198 22:03:40 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:08.198 22:03:40 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
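# editor's note: the ASAN_OPTIONS and UBSAN_OPTIONS exported above are the
# standard sanitizer runtime switches: colon-separated key=value pairs read from
# the environment when an instrumented process starts. A hedged example of
# reusing the same settings for a single run (the spdk_tgt path comes from the
# SPDK_APP trace earlier; -h just prints usage and exits):
ASAN_OPTIONS="abort_on_error=1:disable_coredump=0" \
UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1" \
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -h   # any instrumented app works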
00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68668 ]] 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68668 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.qEyIv6 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.qEyIv6/tests/xnvme /tmp/spdk.qEyIv6 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:08.199 22:03:40 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13965488128 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5602889728 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13965488128 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5602889728 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97250267136 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2452512768 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:08.199 * Looking for test storage... 
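# editor's note: the df -T scan above and the selection below implement
# set_test_storage: pick the first candidate directory whose filesystem still
# has the requested free space. A compact sketch of that logic (the helper name
# is illustrative; df --output is GNU coreutils):
pick_storage() {
    local need=$1 dir avail
    shift
    for dir in "$@"; do
        avail=$(df --output=avail -B1 "$dir" 2>/dev/null | tail -n1)  # bytes free
        [[ ${avail:-0} -ge $need ]] && { echo "$dir"; return 0; }
    done
    return 1
}
# e.g. pick_storage $((2 * 1024 ** 3)) "$testdir" /tmp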
00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:08.199 22:03:40 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13965488128 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:08.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:08.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.200 --rc genhtml_branch_coverage=1 00:13:08.200 --rc genhtml_function_coverage=1 00:13:08.200 --rc genhtml_legend=1 00:13:08.200 --rc geninfo_all_blocks=1 00:13:08.200 --rc geninfo_unexecuted_blocks=1 00:13:08.200 00:13:08.200 ' 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:08.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.200 --rc genhtml_branch_coverage=1 00:13:08.200 --rc genhtml_function_coverage=1 00:13:08.200 --rc genhtml_legend=1 00:13:08.200 --rc geninfo_all_blocks=1 
00:13:08.200 --rc geninfo_unexecuted_blocks=1 00:13:08.200 00:13:08.200 ' 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:08.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.200 --rc genhtml_branch_coverage=1 00:13:08.200 --rc genhtml_function_coverage=1 00:13:08.200 --rc genhtml_legend=1 00:13:08.200 --rc geninfo_all_blocks=1 00:13:08.200 --rc geninfo_unexecuted_blocks=1 00:13:08.200 00:13:08.200 ' 00:13:08.200 22:03:40 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:08.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.200 --rc genhtml_branch_coverage=1 00:13:08.200 --rc genhtml_function_coverage=1 00:13:08.200 --rc genhtml_legend=1 00:13:08.200 --rc geninfo_all_blocks=1 00:13:08.200 --rc geninfo_unexecuted_blocks=1 00:13:08.200 00:13:08.200 ' 00:13:08.200 22:03:40 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.200 22:03:40 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.200 22:03:40 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.200 22:03:40 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.200 22:03:40 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.200 22:03:40 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:08.200 22:03:40 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.200 22:03:40 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:08.200 22:03:40 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:08.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:08.722 Waiting for block devices as requested 00:13:08.722 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.722 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.982 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.982 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.257 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:14.257 22:03:46 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:14.517 22:03:47 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:14.517 22:03:47 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:14.517 22:03:47 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:14.517 22:03:47 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:14.517 22:03:47 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:14.517 22:03:47 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:14.517 22:03:47 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:14.775 No valid GPT data, bailing 00:13:14.775 22:03:47 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:14.775 22:03:47 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:13:14.775 22:03:47 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:14.775 22:03:47 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:14.776 22:03:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:14.776 22:03:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.776 22:03:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.776 22:03:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.776 ************************************ 00:13:14.776 START TEST xnvme_rpc 00:13:14.776 ************************************ 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69054 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69054 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69054 ']' 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:14.776 22:03:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.776 [2024-12-06 22:03:47.548269] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
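Functionally, the xnvme_rpc test spinning up here drives the sequence sketched below, shown with scripts/rpc.py in place of the suite's rpc_cmd helper (same RPC names and the same jq filter as in the trace; the paths are this run's):

    # Launch the SPDK target, then create an xnvme bdev over libaio
    # (no -c flag, so conserve_cpu stays false).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    # Read the created bdev's parameters back out of the runtime config.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The assertions visible below ([[ xnvme_bdev == ... ]], [[ /dev/nvme0n1 == ... ]], [[ libaio == ... ]], [[ false == ... ]]) are just these jq extractions compared against the parameters passed to the create call.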
00:13:14.776 [2024-12-06 22:03:47.548400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69054 ] 00:13:15.034 [2024-12-06 22:03:47.707643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.034 [2024-12-06 22:03:47.809399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.600 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.601 xnvme_bdev 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.601 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.859 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69054 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69054 ']' 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69054 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69054 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.860 killing process with pid 69054 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69054' 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69054 00:13:15.860 22:03:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69054 00:13:17.282 00:13:17.282 real 0m2.632s 00:13:17.282 user 0m2.706s 00:13:17.282 sys 0m0.362s 00:13:17.282 22:03:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.282 ************************************ 00:13:17.282 END TEST xnvme_rpc 00:13:17.282 22:03:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.282 ************************************ 00:13:17.282 22:03:50 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:17.282 22:03:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:17.282 22:03:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.282 22:03:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 ************************************ 00:13:17.539 START TEST xnvme_bdevperf 00:13:17.539 ************************************ 00:13:17.539 22:03:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:17.539 22:03:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:17.539 22:03:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:17.539 22:03:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:17.539 22:03:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:17.539 22:03:50 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:17.539 22:03:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:17.539 22:03:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 { 00:13:17.539 "subsystems": [ 00:13:17.539 { 00:13:17.539 "subsystem": "bdev", 00:13:17.539 "config": [ 00:13:17.539 { 00:13:17.539 "params": { 00:13:17.539 "io_mechanism": "libaio", 00:13:17.539 "conserve_cpu": false, 00:13:17.539 "filename": "/dev/nvme0n1", 00:13:17.539 "name": "xnvme_bdev" 00:13:17.539 }, 00:13:17.539 "method": "bdev_xnvme_create" 00:13:17.539 }, 00:13:17.539 { 00:13:17.539 "method": "bdev_wait_for_examine" 00:13:17.539 } 00:13:17.539 ] 00:13:17.539 } 00:13:17.539 ] 00:13:17.539 } 00:13:17.539 [2024-12-06 22:03:50.229440] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:13:17.539 [2024-12-06 22:03:50.229554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69121 ] 00:13:17.539 [2024-12-06 22:03:50.385968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.799 [2024-12-06 22:03:50.464933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.059 Running I/O for 5 seconds... 00:13:20.004 34810.00 IOPS, 135.98 MiB/s [2024-12-06T22:03:53.848Z] 34157.00 IOPS, 133.43 MiB/s [2024-12-06T22:03:54.791Z] 32798.67 IOPS, 128.12 MiB/s [2024-12-06T22:03:55.732Z] 32592.00 IOPS, 127.31 MiB/s 00:13:22.860 Latency(us) 00:13:22.860 [2024-12-06T22:03:55.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.860 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:22.860 xnvme_bdev : 5.00 32735.51 127.87 0.00 0.00 1950.66 387.54 6856.07 00:13:22.860 [2024-12-06T22:03:55.732Z] =================================================================================================================== 00:13:22.860 [2024-12-06T22:03:55.732Z] Total : 32735.51 127.87 0.00 0.00 1950.66 387.54 6856.07 00:13:23.801 22:03:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:23.801 22:03:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:23.801 22:03:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:23.801 22:03:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:23.801 22:03:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:23.801 { 00:13:23.801 "subsystems": [ 00:13:23.801 { 00:13:23.801 "subsystem": "bdev", 00:13:23.801 "config": [ 00:13:23.801 { 00:13:23.801 "params": { 00:13:23.801 "io_mechanism": "libaio", 00:13:23.801 "conserve_cpu": false, 00:13:23.801 "filename": "/dev/nvme0n1", 00:13:23.801 "name": "xnvme_bdev" 00:13:23.801 }, 00:13:23.801 "method": "bdev_xnvme_create" 00:13:23.801 }, 00:13:23.801 { 00:13:23.801 "method": "bdev_wait_for_examine" 00:13:23.801 } 00:13:23.801 ] 00:13:23.801 } 00:13:23.801 ] 00:13:23.801 } 00:13:23.801 [2024-12-06 22:03:56.585939] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
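Quick sanity check on the randread table above: at the fixed 4096-byte I/O size, both the bandwidth column and the queue math follow from the IOPS figure.

    awk 'BEGIN { print 32735.51 * 4096 / 1048576 }'   # 4 KiB per I/O => 127.87 MiB/s, matching the MiB/s column
    awk 'BEGIN { print 64 / 1950.66e-6 }'             # Little's law: depth 64 / 1950.66 us avg => ~32.8k IOPS, within ~0.2%

The randwrite pass that starts next reuses the same JSON bdev config; only the -w workload flag changes.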
00:13:23.801 [2024-12-06 22:03:56.586093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69199 ] 00:13:24.062 [2024-12-06 22:03:56.750710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.062 [2024-12-06 22:03:56.875228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.324 Running I/O for 5 seconds... 00:13:26.654 17162.00 IOPS, 67.04 MiB/s [2024-12-06T22:04:00.471Z] 10014.50 IOPS, 39.12 MiB/s [2024-12-06T22:04:01.494Z] 7621.00 IOPS, 29.77 MiB/s [2024-12-06T22:04:02.444Z] 6417.00 IOPS, 25.07 MiB/s [2024-12-06T22:04:02.444Z] 5632.00 IOPS, 22.00 MiB/s 00:13:29.572 Latency(us) 00:13:29.572 [2024-12-06T22:04:02.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.572 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:29.572 xnvme_bdev : 5.03 5615.44 21.94 0.00 0.00 11363.66 53.96 37103.46 00:13:29.572 [2024-12-06T22:04:02.444Z] =================================================================================================================== 00:13:29.572 [2024-12-06T22:04:02.444Z] Total : 5615.44 21.94 0.00 0.00 11363.66 53.96 37103.46 00:13:30.513 00:13:30.513 real 0m12.917s 00:13:30.513 user 0m8.049s 00:13:30.513 sys 0m3.808s 00:13:30.513 22:04:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.513 ************************************ 00:13:30.513 END TEST xnvme_bdevperf 00:13:30.513 ************************************ 00:13:30.513 22:04:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:30.513 22:04:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:30.513 22:04:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:30.513 22:04:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.513 22:04:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.513 ************************************ 00:13:30.513 START TEST xnvme_fio_plugin 00:13:30.513 ************************************ 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:30.513 22:04:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:30.513 { 00:13:30.513 "subsystems": [ 00:13:30.513 { 00:13:30.513 "subsystem": "bdev", 00:13:30.513 "config": [ 00:13:30.513 { 00:13:30.513 "params": { 00:13:30.513 "io_mechanism": "libaio", 00:13:30.513 "conserve_cpu": false, 00:13:30.513 "filename": "/dev/nvme0n1", 00:13:30.513 "name": "xnvme_bdev" 00:13:30.513 }, 00:13:30.513 "method": "bdev_xnvme_create" 00:13:30.513 }, 00:13:30.513 { 00:13:30.513 "method": "bdev_wait_for_examine" 00:13:30.513 } 00:13:30.513 ] 00:13:30.513 } 00:13:30.513 ] 00:13:30.513 } 00:13:30.513 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:30.513 fio-3.35 00:13:30.513 Starting 1 thread 00:13:37.103 00:13:37.103 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69318: Fri Dec 6 22:04:08 2024 00:13:37.103 read: IOPS=36.2k, BW=141MiB/s (148MB/s)(707MiB/5001msec) 00:13:37.103 slat (usec): min=4, max=1962, avg=19.95, stdev=84.78 00:13:37.103 clat (usec): min=85, max=5076, avg=1222.39, stdev=501.89 00:13:37.103 lat (usec): min=164, max=5467, avg=1242.34, stdev=495.45 00:13:37.103 clat percentiles (usec): 00:13:37.103 | 1.00th=[ 265], 5.00th=[ 465], 10.00th=[ 611], 20.00th=[ 816], 00:13:37.103 | 30.00th=[ 955], 40.00th=[ 1090], 50.00th=[ 1205], 60.00th=[ 1319], 00:13:37.103 | 70.00th=[ 1434], 80.00th=[ 1582], 90.00th=[ 1811], 95.00th=[ 2040], 00:13:37.103 | 99.00th=[ 2802], 99.50th=[ 3163], 99.90th=[ 3916], 99.95th=[ 4146], 00:13:37.103 | 99.99th=[ 4490] 00:13:37.103 bw ( KiB/s): min=129744, max=173704, 
per=100.00%, avg=144845.33, stdev=14277.41, samples=9 00:13:37.103 iops : min=32436, max=43426, avg=36211.33, stdev=3569.35, samples=9 00:13:37.103 lat (usec) : 100=0.01%, 250=0.78%, 500=5.25%, 750=10.21%, 1000=17.34% 00:13:37.103 lat (msec) : 2=60.91%, 4=5.45%, 10=0.07% 00:13:37.103 cpu : usr=41.88%, sys=48.52%, ctx=10, majf=0, minf=764 00:13:37.103 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.5%, 16=22.9%, 32=61.8%, >=64=2.1% 00:13:37.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.103 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:37.103 issued rwts: total=181055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:37.103 00:13:37.103 Run status group 0 (all jobs): 00:13:37.103 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=707MiB (742MB), run=5001-5001msec 00:13:37.365 ----------------------------------------------------- 00:13:37.365 Suppressions used: 00:13:37.365 count bytes template 00:13:37.365 1 11 /usr/src/fio/parse.c 00:13:37.365 1 8 libtcmalloc_minimal.so 00:13:37.365 1 904 libcrypto.so 00:13:37.365 ----------------------------------------------------- 00:13:37.365 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:37.365 22:04:10 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:13:37.365 22:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:37.365 {
00:13:37.365 "subsystems": [
00:13:37.365 {
00:13:37.365 "subsystem": "bdev",
00:13:37.365 "config": [
00:13:37.365 {
00:13:37.365 "params": {
00:13:37.365 "io_mechanism": "libaio",
00:13:37.365 "conserve_cpu": false,
00:13:37.365 "filename": "/dev/nvme0n1",
00:13:37.365 "name": "xnvme_bdev"
00:13:37.365 },
00:13:37.365 "method": "bdev_xnvme_create"
00:13:37.365 },
00:13:37.365 {
00:13:37.365 "method": "bdev_wait_for_examine"
00:13:37.365 }
00:13:37.365 ]
00:13:37.365 }
00:13:37.365 ]
00:13:37.365 }
00:13:37.627 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:13:37.627 fio-3.35
00:13:37.627 Starting 1 thread
00:13:44.295
00:13:44.295 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69411: Fri Dec 6 22:04:16 2024
00:13:44.295 write: IOPS=37.6k, BW=147MiB/s (154MB/s)(735MiB/5001msec); 0 zone resets
00:13:44.295 slat (usec): min=4, max=1623, avg=21.13, stdev=68.82
00:13:44.295 clat (usec): min=105, max=9973, avg=1133.35, stdev=555.99
00:13:44.295 lat (usec): min=170, max=9978, avg=1154.48, stdev=553.62
00:13:44.295 clat percentiles (usec):
00:13:44.295 | 1.00th=[ 251], 5.00th=[ 404], 10.00th=[ 523], 20.00th=[ 709],
00:13:44.295 | 30.00th=[ 840], 40.00th=[ 947], 50.00th=[ 1057], 60.00th=[ 1172],
00:13:44.295 | 70.00th=[ 1303], 80.00th=[ 1467], 90.00th=[ 1795], 95.00th=[ 2147],
00:13:44.295 | 99.00th=[ 3032], 99.50th=[ 3326], 99.90th=[ 4080], 99.95th=[ 4424],
00:13:44.295 | 99.99th=[ 8455]
00:13:44.295 bw ( KiB/s): min=130176, max=177104, per=99.52%, avg=149816.00, stdev=14094.83, samples=9
00:13:44.295 iops : min=32544, max=44276, avg=37454.00, stdev=3523.71, samples=9
00:13:44.295 lat (usec) : 250=1.00%, 500=7.99%, 750=13.87%, 1000=22.15%
00:13:44.295 lat (msec) : 2=48.58%, 4=6.30%, 10=0.11%
00:13:44.295 cpu : usr=33.48%, sys=51.54%, ctx=33, majf=0, minf=765
00:13:44.295 IO depths : 1=0.2%, 2=0.8%, 4=2.8%, 8=9.1%, 16=24.4%, 32=60.6%, >=64=2.1%
00:13:44.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:44.295 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0%
00:13:44.295 issued rwts: total=0,188214,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:44.295 latency : target=0, window=0, percentile=100.00%, depth=64
00:13:44.295
00:13:44.295 Run status group 0 (all jobs):
00:13:44.295 WRITE: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=735MiB (771MB), run=5001-5001msec
00:13:44.295 -----------------------------------------------------
00:13:44.295 Suppressions used:
00:13:44.295 count bytes template
00:13:44.295 1 11 /usr/src/fio/parse.c
00:13:44.295 1 8 libtcmalloc_minimal.so
00:13:44.295 1 904 libcrypto.so
00:13:44.295 -----------------------------------------------------
00:13:44.295
00:13:44.295
00:13:44.295 real 0m13.901s
00:13:44.295 user 0m6.653s
00:13:44.295 sys 0m5.633s
00:13:44.295 22:04:17
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.295 ************************************ 00:13:44.295 END TEST xnvme_fio_plugin 00:13:44.295 ************************************ 00:13:44.295 22:04:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:44.295 22:04:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:44.295 22:04:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:44.295 22:04:17 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:44.295 22:04:17 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:44.295 22:04:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.295 22:04:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.295 22:04:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.295 ************************************ 00:13:44.295 START TEST xnvme_rpc 00:13:44.295 ************************************ 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69496 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69496 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69496 ']' 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.295 22:04:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:44.556 [2024-12-06 22:04:17.204629] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
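This second xnvme_rpc pass repeats the first with the conserve_cpu knob flipped. Per the cc array initialized above (cc["false"]= , cc["true"]=-c), the boolean maps to an optional -c flag on the create call. A sketch of the inferred expansion (the quoted-empty-string detail matches the literal '' argument visible in the earlier conserve_cpu=false run):

    declare -A cc=(["false"]="" ["true"]="-c")
    conserve_cpu=true
    # Expands to: bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    # (and to a trailing '' argument when conserve_cpu=false)
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio "${cc[$conserve_cpu]}"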
00:13:44.556 [2024-12-06 22:04:17.204784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69496 ] 00:13:44.556 [2024-12-06 22:04:17.370416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.817 [2024-12-06 22:04:17.500018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.391 xnvme_bdev 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.391 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:45.652 22:04:18 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69496 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69496 ']' 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69496 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69496 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.652 killing process with pid 69496 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69496' 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69496 00:13:45.652 22:04:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69496 00:13:47.570 00:13:47.570 real 0m3.139s 00:13:47.570 user 0m3.143s 00:13:47.570 sys 0m0.474s 00:13:47.570 22:04:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.570 ************************************ 00:13:47.570 END TEST xnvme_rpc 00:13:47.570 ************************************ 00:13:47.570 22:04:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.570 22:04:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:47.570 22:04:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.570 22:04:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.570 22:04:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.571 ************************************ 00:13:47.571 START TEST xnvme_bdevperf 00:13:47.571 ************************************ 00:13:47.571 22:04:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:47.571 22:04:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:47.571 22:04:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:47.571 22:04:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.571 22:04:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:47.571 22:04:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:47.571 22:04:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:47.571 22:04:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:47.571 { 00:13:47.571 "subsystems": [ 00:13:47.571 { 00:13:47.571 "subsystem": "bdev", 00:13:47.571 "config": [ 00:13:47.571 { 00:13:47.571 "params": { 00:13:47.571 "io_mechanism": "libaio", 00:13:47.571 "conserve_cpu": true, 00:13:47.571 "filename": "/dev/nvme0n1", 00:13:47.571 "name": "xnvme_bdev" 00:13:47.571 }, 00:13:47.571 "method": "bdev_xnvme_create" 00:13:47.571 }, 00:13:47.571 { 00:13:47.571 "method": "bdev_wait_for_examine" 00:13:47.571 } 00:13:47.571 ] 00:13:47.571 } 00:13:47.571 ] 00:13:47.571 } 00:13:47.571 [2024-12-06 22:04:20.398038] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:13:47.571 [2024-12-06 22:04:20.398270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69570 ] 00:13:47.833 [2024-12-06 22:04:20.571230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.093 [2024-12-06 22:04:20.734866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.355 Running I/O for 5 seconds... 00:13:50.244 34535.00 IOPS, 134.90 MiB/s [2024-12-06T22:04:24.504Z] 33921.50 IOPS, 132.51 MiB/s [2024-12-06T22:04:25.447Z] 34984.67 IOPS, 136.66 MiB/s [2024-12-06T22:04:26.390Z] 34746.75 IOPS, 135.73 MiB/s [2024-12-06T22:04:26.390Z] 34390.00 IOPS, 134.34 MiB/s 00:13:53.518 Latency(us) 00:13:53.518 [2024-12-06T22:04:26.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.518 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:53.518 xnvme_bdev : 5.01 34332.16 134.11 0.00 0.00 1857.96 81.92 17140.18 00:13:53.518 [2024-12-06T22:04:26.390Z] =================================================================================================================== 00:13:53.518 [2024-12-06T22:04:26.390Z] Total : 34332.16 134.11 0.00 0.00 1857.96 81.92 17140.18 00:13:54.141 22:04:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:54.141 22:04:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:54.141 22:04:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:54.141 22:04:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:54.141 22:04:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:54.141 { 00:13:54.141 "subsystems": [ 00:13:54.141 { 00:13:54.141 "subsystem": "bdev", 00:13:54.141 "config": [ 00:13:54.141 { 00:13:54.141 "params": { 00:13:54.141 "io_mechanism": "libaio", 00:13:54.141 "conserve_cpu": true, 00:13:54.141 "filename": "/dev/nvme0n1", 00:13:54.141 "name": "xnvme_bdev" 00:13:54.141 }, 00:13:54.141 "method": "bdev_xnvme_create" 00:13:54.141 }, 00:13:54.141 { 00:13:54.141 "method": "bdev_wait_for_examine" 00:13:54.141 } 00:13:54.141 ] 00:13:54.141 } 00:13:54.141 ] 00:13:54.141 } 00:13:54.141 [2024-12-06 22:04:27.006188] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
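A note on the recurring --json /dev/fd/62 in these bdevperf invocations: the JSON blocks embedded in the log are emitted per-test by gen_conf and handed to bdevperf over an anonymous fd. The /dev/fd/62 path is the usual footprint of bash process substitution, so the invocation is presumably equivalent to the sketch below (an inference from the trace, not a quoted line of xnvme.sh):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_conf) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096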
00:13:54.141 [2024-12-06 22:04:27.006468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69645 ] 00:13:54.402 [2024-12-06 22:04:27.192474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.662 [2024-12-06 22:04:27.320045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.922 Running I/O for 5 seconds... 00:13:56.808 3468.00 IOPS, 13.55 MiB/s [2024-12-06T22:04:31.068Z] 3430.50 IOPS, 13.40 MiB/s [2024-12-06T22:04:31.635Z] 3372.33 IOPS, 13.17 MiB/s [2024-12-06T22:04:33.030Z] 3427.50 IOPS, 13.39 MiB/s [2024-12-06T22:04:33.030Z] 3426.20 IOPS, 13.38 MiB/s 00:14:00.158 Latency(us) 00:14:00.158 [2024-12-06T22:04:33.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.159 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:00.159 xnvme_bdev : 5.02 3426.82 13.39 0.00 0.00 18635.51 60.65 42144.69 00:14:00.159 [2024-12-06T22:04:33.031Z] =================================================================================================================== 00:14:00.159 [2024-12-06T22:04:33.031Z] Total : 3426.82 13.39 0.00 0.00 18635.51 60.65 42144.69 00:14:00.724 00:14:00.724 real 0m13.170s 00:14:00.724 user 0m8.515s 00:14:00.724 sys 0m3.487s 00:14:00.724 22:04:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.724 ************************************ 00:14:00.724 END TEST xnvme_bdevperf 00:14:00.724 ************************************ 00:14:00.724 22:04:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.724 22:04:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:00.724 22:04:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.724 22:04:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.724 22:04:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.724 ************************************ 00:14:00.724 START TEST xnvme_fio_plugin 00:14:00.724 ************************************ 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:00.724 22:04:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.724 { 00:14:00.724 "subsystems": [ 00:14:00.724 { 00:14:00.724 "subsystem": "bdev", 00:14:00.724 "config": [ 00:14:00.724 { 00:14:00.724 "params": { 00:14:00.724 "io_mechanism": "libaio", 00:14:00.724 "conserve_cpu": true, 00:14:00.724 "filename": "/dev/nvme0n1", 00:14:00.724 "name": "xnvme_bdev" 00:14:00.724 }, 00:14:00.724 "method": "bdev_xnvme_create" 00:14:00.724 }, 00:14:00.724 { 00:14:00.724 "method": "bdev_wait_for_examine" 00:14:00.724 } 00:14:00.724 ] 00:14:00.724 } 00:14:00.724 ] 00:14:00.724 } 00:14:00.983 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:00.983 fio-3.35 00:14:00.983 Starting 1 thread 00:14:07.544 00:14:07.544 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69764: Fri Dec 6 22:04:39 2024 00:14:07.544 read: IOPS=46.8k, BW=183MiB/s (192MB/s)(915MiB/5005msec) 00:14:07.544 slat (usec): min=3, max=1531, avg=16.87, stdev=45.43 00:14:07.544 clat (usec): min=39, max=12236, avg=897.59, stdev=525.56 00:14:07.544 lat (usec): min=59, max=12241, avg=914.46, stdev=524.57 00:14:07.544 clat percentiles (usec): 00:14:07.544 | 1.00th=[ 190], 5.00th=[ 273], 10.00th=[ 359], 20.00th=[ 498], 00:14:07.544 | 30.00th=[ 611], 40.00th=[ 717], 50.00th=[ 824], 60.00th=[ 930], 00:14:07.544 | 70.00th=[ 1045], 80.00th=[ 1205], 90.00th=[ 1450], 95.00th=[ 1729], 00:14:07.544 | 99.00th=[ 2769], 99.50th=[ 3163], 99.90th=[ 5014], 99.95th=[ 6390], 00:14:07.544 | 99.99th=[ 8160] 00:14:07.544 bw ( KiB/s): min=159512, max=216944, 
per=100.00%, avg=187304.00, stdev=18397.29, samples=10 00:14:07.544 iops : min=39878, max=54236, avg=46826.00, stdev=4599.32, samples=10 00:14:07.544 lat (usec) : 50=0.01%, 100=0.01%, 250=3.48%, 500=16.80%, 750=22.35% 00:14:07.544 lat (usec) : 1000=23.70% 00:14:07.544 lat (msec) : 2=30.55%, 4=2.91%, 10=0.20%, 20=0.01% 00:14:07.544 cpu : usr=35.73%, sys=52.82%, ctx=10, majf=0, minf=764 00:14:07.544 IO depths : 1=0.2%, 2=0.8%, 4=3.0%, 8=9.0%, 16=24.1%, 32=60.9%, >=64=2.0% 00:14:07.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.544 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:14:07.544 issued rwts: total=234147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.544 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:07.544 00:14:07.544 Run status group 0 (all jobs): 00:14:07.544 READ: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=915MiB (959MB), run=5005-5005msec 00:14:07.805 ----------------------------------------------------- 00:14:07.805 Suppressions used: 00:14:07.805 count bytes template 00:14:07.805 1 11 /usr/src/fio/parse.c 00:14:07.805 1 8 libtcmalloc_minimal.so 00:14:07.805 1 904 libcrypto.so 00:14:07.805 ----------------------------------------------------- 00:14:07.805 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:07.805 22:04:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.805 { 00:14:07.805 "subsystems": [ 00:14:07.805 { 00:14:07.805 "subsystem": "bdev", 00:14:07.805 "config": [ 00:14:07.805 { 00:14:07.805 "params": { 00:14:07.805 "io_mechanism": "libaio", 00:14:07.805 "conserve_cpu": true, 00:14:07.805 "filename": "/dev/nvme0n1", 00:14:07.805 "name": "xnvme_bdev" 00:14:07.805 }, 00:14:07.805 "method": "bdev_xnvme_create" 00:14:07.805 }, 00:14:07.805 { 00:14:07.805 "method": "bdev_wait_for_examine" 00:14:07.805 } 00:14:07.805 ] 00:14:07.805 } 00:14:07.805 ] 00:14:07.805 } 00:14:08.065 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:08.065 fio-3.35 00:14:08.065 Starting 1 thread 00:14:14.643 00:14:14.643 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69856: Fri Dec 6 22:04:46 2024 00:14:14.643 write: IOPS=34.2k, BW=134MiB/s (140MB/s)(668MiB/5001msec); 0 zone resets 00:14:14.643 slat (usec): min=4, max=2007, avg=22.12, stdev=86.27 00:14:14.643 clat (usec): min=69, max=5126, avg=1267.68, stdev=541.65 00:14:14.643 lat (usec): min=183, max=5131, avg=1289.80, stdev=534.93 00:14:14.643 clat percentiles (usec): 00:14:14.643 | 1.00th=[ 265], 5.00th=[ 441], 10.00th=[ 611], 20.00th=[ 816], 00:14:14.643 | 30.00th=[ 979], 40.00th=[ 1106], 50.00th=[ 1237], 60.00th=[ 1369], 00:14:14.643 | 70.00th=[ 1500], 80.00th=[ 1663], 90.00th=[ 1909], 95.00th=[ 2180], 00:14:14.643 | 99.00th=[ 2966], 99.50th=[ 3294], 99.90th=[ 3916], 99.95th=[ 4178], 00:14:14.643 | 99.99th=[ 4621] 00:14:14.643 bw ( KiB/s): min=124712, max=148864, per=99.59%, avg=136152.00, stdev=6760.70, samples=9 00:14:14.643 iops : min=31178, max=37216, avg=34038.00, stdev=1690.17, samples=9 00:14:14.643 lat (usec) : 100=0.01%, 250=0.82%, 500=5.73%, 750=9.64%, 1000=15.43% 00:14:14.643 lat (msec) : 2=60.77%, 4=7.53%, 10=0.08% 00:14:14.643 cpu : usr=39.30%, sys=51.06%, ctx=9, majf=0, minf=765 00:14:14.643 IO depths : 1=0.4%, 2=1.1%, 4=3.2%, 8=9.0%, 16=23.8%, 32=60.5%, >=64=2.1% 00:14:14.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.643 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:14:14.643 issued rwts: total=0,170924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.643 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:14.643 00:14:14.643 Run status group 0 (all jobs): 00:14:14.643 WRITE: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=668MiB (700MB), run=5001-5001msec 00:14:14.903 ----------------------------------------------------- 00:14:14.903 Suppressions used: 00:14:14.903 count bytes template 00:14:14.903 1 11 /usr/src/fio/parse.c 00:14:14.903 1 8 libtcmalloc_minimal.so 00:14:14.903 1 904 libcrypto.so 00:14:14.903 ----------------------------------------------------- 00:14:14.903 00:14:14.903 00:14:14.903 real 0m14.066s 00:14:14.903 user 
0m6.712s 00:14:14.903 sys 0m5.894s 00:14:14.903 22:04:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.903 ************************************ 00:14:14.903 END TEST xnvme_fio_plugin 00:14:14.903 ************************************ 00:14:14.903 22:04:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:14.903 22:04:47 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:14.903 22:04:47 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:14.903 22:04:47 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:14.903 22:04:47 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:14.903 22:04:47 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:14.903 22:04:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:14.903 22:04:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:14.903 22:04:47 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:14.903 22:04:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:14.903 22:04:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:14.903 22:04:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.903 22:04:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.903 ************************************ 00:14:14.903 START TEST xnvme_rpc 00:14:14.903 ************************************ 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69949 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69949 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69949 ']' 00:14:14.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.903 22:04:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.164 [2024-12-06 22:04:47.780522] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
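The xnvme_rpc test starting here exercises the same bdev against a live spdk_tgt instead of a one-shot JSON config: create the bdev over RPC, read the config back with framework_get_config, compare each parameter with jq, then delete. Condensed to the underlying rpc.py calls, it is roughly the following sketch (rpc_cmd is a thin wrapper around rpc.py talking to /var/tmp/spdk.sock, and the trailing '' in the create call below is the empty conserve_cpu flag from the cc map declared above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# spdk_tgt must already be up and listening on /var/tmp/spdk.sock.
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring    # no -c, so conserve_cpu=false
$rpc framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # io_uring
$rpc bdev_xnvme_delete xnvme_bdev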
00:14:15.164 [2024-12-06 22:04:47.780663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69949 ] 00:14:15.164 [2024-12-06 22:04:47.944379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.425 [2024-12-06 22:04:48.081522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.997 xnvme_bdev 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.997 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:16.257 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69949 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69949 ']' 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69949 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69949 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69949' 00:14:16.258 killing process with pid 69949 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69949 00:14:16.258 22:04:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69949 00:14:18.171 00:14:18.171 real 0m3.073s 00:14:18.171 user 0m3.047s 00:14:18.171 sys 0m0.510s 00:14:18.171 22:04:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.171 ************************************ 00:14:18.171 END TEST xnvme_rpc 00:14:18.171 ************************************ 00:14:18.171 22:04:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.171 22:04:50 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:18.171 22:04:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:18.171 22:04:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.171 22:04:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.171 ************************************ 00:14:18.171 START TEST xnvme_bdevperf 00:14:18.171 ************************************ 00:14:18.171 22:04:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:18.171 22:04:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:18.171 22:04:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:18.171 22:04:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.171 22:04:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:18.171 22:04:50 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:18.171 22:04:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:18.171 22:04:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.171 { 00:14:18.171 "subsystems": [ 00:14:18.171 { 00:14:18.171 "subsystem": "bdev", 00:14:18.171 "config": [ 00:14:18.171 { 00:14:18.171 "params": { 00:14:18.171 "io_mechanism": "io_uring", 00:14:18.171 "conserve_cpu": false, 00:14:18.171 "filename": "/dev/nvme0n1", 00:14:18.171 "name": "xnvme_bdev" 00:14:18.171 }, 00:14:18.171 "method": "bdev_xnvme_create" 00:14:18.171 }, 00:14:18.171 { 00:14:18.171 "method": "bdev_wait_for_examine" 00:14:18.171 } 00:14:18.171 ] 00:14:18.171 } 00:14:18.171 ] 00:14:18.171 } 00:14:18.171 [2024-12-06 22:04:50.909963] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:14:18.171 [2024-12-06 22:04:50.910124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70023 ] 00:14:18.431 [2024-12-06 22:04:51.079299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.431 [2024-12-06 22:04:51.222728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.690 Running I/O for 5 seconds... 00:14:21.017 33576.00 IOPS, 131.16 MiB/s [2024-12-06T22:04:54.830Z] 33069.00 IOPS, 129.18 MiB/s [2024-12-06T22:04:55.863Z] 33358.00 IOPS, 130.30 MiB/s [2024-12-06T22:04:56.804Z] 33382.75 IOPS, 130.40 MiB/s [2024-12-06T22:04:56.804Z] 33415.00 IOPS, 130.53 MiB/s 00:14:23.932 Latency(us) 00:14:23.932 [2024-12-06T22:04:56.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.932 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:23.932 xnvme_bdev : 5.01 33381.32 130.40 0.00 0.00 1911.94 293.02 34482.02 00:14:23.932 [2024-12-06T22:04:56.805Z] =================================================================================================================== 00:14:23.933 [2024-12-06T22:04:56.805Z] Total : 33381.32 130.40 0.00 0.00 1911.94 293.02 34482.02 00:14:24.874 22:04:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:24.874 22:04:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:24.874 22:04:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:24.874 22:04:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:24.874 22:04:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:24.874 { 00:14:24.874 "subsystems": [ 00:14:24.874 { 00:14:24.874 "subsystem": "bdev", 00:14:24.874 "config": [ 00:14:24.874 { 00:14:24.874 "params": { 00:14:24.874 "io_mechanism": "io_uring", 00:14:24.874 "conserve_cpu": false, 00:14:24.874 "filename": "/dev/nvme0n1", 00:14:24.874 "name": "xnvme_bdev" 00:14:24.874 }, 00:14:24.874 "method": "bdev_xnvme_create" 00:14:24.875 }, 00:14:24.875 { 00:14:24.875 "method": "bdev_wait_for_examine" 00:14:24.875 } 00:14:24.875 ] 00:14:24.875 } 00:14:24.875 ] 00:14:24.875 } 00:14:24.875 [2024-12-06 22:04:57.521415] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
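Throughput in these tables is purely a function of IOPS, since every IO is the fixed 4 KiB set by -o 4096: MiB/s is IOPS x 4096 / 2^20. A quick check against the randread total above:

echo '33381.32 * 4096 / 1048576' | bc -l   # ~130.40 MiB/s, matching the table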
00:14:24.875 [2024-12-06 22:04:57.521573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70098 ] 00:14:24.875 [2024-12-06 22:04:57.690289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.136 [2024-12-06 22:04:57.836067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.398 Running I/O for 5 seconds... 00:14:27.727 10516.00 IOPS, 41.08 MiB/s [2024-12-06T22:05:01.540Z] 10573.00 IOPS, 41.30 MiB/s [2024-12-06T22:05:02.484Z] 10431.00 IOPS, 40.75 MiB/s [2024-12-06T22:05:03.429Z] 10366.25 IOPS, 40.49 MiB/s [2024-12-06T22:05:03.429Z] 10509.40 IOPS, 41.05 MiB/s 00:14:30.557 Latency(us) 00:14:30.557 [2024-12-06T22:05:03.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.557 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:30.557 xnvme_bdev : 5.01 10502.98 41.03 0.00 0.00 6083.65 65.77 30449.03 00:14:30.557 [2024-12-06T22:05:03.429Z] =================================================================================================================== 00:14:30.557 [2024-12-06T22:05:03.429Z] Total : 10502.98 41.03 0.00 0.00 6083.65 65.77 30449.03 00:14:31.128 00:14:31.128 real 0m13.168s 00:14:31.128 user 0m5.944s 00:14:31.128 sys 0m6.950s 00:14:31.128 22:05:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.128 ************************************ 00:14:31.128 END TEST xnvme_bdevperf 00:14:31.128 ************************************ 00:14:31.128 22:05:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:31.390 22:05:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:31.390 22:05:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:31.390 22:05:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.390 22:05:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.390 ************************************ 00:14:31.390 START TEST xnvme_fio_plugin 00:14:31.390 ************************************ 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:31.390 22:05:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.390 { 00:14:31.390 "subsystems": [ 00:14:31.390 { 00:14:31.390 "subsystem": "bdev", 00:14:31.390 "config": [ 00:14:31.390 { 00:14:31.390 "params": { 00:14:31.390 "io_mechanism": "io_uring", 00:14:31.390 "conserve_cpu": false, 00:14:31.390 "filename": "/dev/nvme0n1", 00:14:31.390 "name": "xnvme_bdev" 00:14:31.390 }, 00:14:31.390 "method": "bdev_xnvme_create" 00:14:31.390 }, 00:14:31.390 { 00:14:31.390 "method": "bdev_wait_for_examine" 00:14:31.390 } 00:14:31.390 ] 00:14:31.390 } 00:14:31.390 ] 00:14:31.390 } 00:14:31.651 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:31.651 fio-3.35 00:14:31.651 Starting 1 thread 00:14:38.236 00:14:38.236 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70215: Fri Dec 6 22:05:09 2024 00:14:38.236 read: IOPS=34.2k, BW=134MiB/s (140MB/s)(669MiB/5001msec) 00:14:38.236 slat (nsec): min=2849, max=72185, avg=4098.23, stdev=2257.04 00:14:38.236 clat (usec): min=402, max=5682, avg=1701.85, stdev=266.64 00:14:38.236 lat (usec): min=406, max=5695, avg=1705.95, stdev=267.07 00:14:38.236 clat percentiles (usec): 00:14:38.236 | 1.00th=[ 1237], 5.00th=[ 1352], 10.00th=[ 1418], 20.00th=[ 1500], 00:14:38.236 | 30.00th=[ 1549], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1729], 00:14:38.236 | 70.00th=[ 1795], 80.00th=[ 1893], 90.00th=[ 2040], 95.00th=[ 2180], 00:14:38.236 | 99.00th=[ 2442], 99.50th=[ 2540], 99.90th=[ 2933], 99.95th=[ 3130], 00:14:38.236 | 99.99th=[ 5604] 00:14:38.236 bw ( KiB/s): min=127488, 
max=143360, per=99.78%, avg=136590.22, stdev=4842.24, samples=9 00:14:38.236 iops : min=31872, max=35840, avg=34147.56, stdev=1210.56, samples=9 00:14:38.236 lat (usec) : 500=0.01% 00:14:38.236 lat (msec) : 2=87.65%, 4=12.31%, 10=0.04% 00:14:38.236 cpu : usr=29.92%, sys=68.58%, ctx=13, majf=0, minf=762 00:14:38.236 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:38.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.236 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:38.236 issued rwts: total=171149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.236 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:38.236 00:14:38.236 Run status group 0 (all jobs): 00:14:38.236 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=669MiB (701MB), run=5001-5001msec 00:14:38.236 ----------------------------------------------------- 00:14:38.236 Suppressions used: 00:14:38.236 count bytes template 00:14:38.236 1 11 /usr/src/fio/parse.c 00:14:38.236 1 8 libtcmalloc_minimal.so 00:14:38.236 1 904 libcrypto.so 00:14:38.236 ----------------------------------------------------- 00:14:38.236 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:38.236 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:38.237 22:05:11 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:38.237 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:38.237 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:38.237 22:05:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:38.237 { 00:14:38.237 "subsystems": [ 00:14:38.237 { 00:14:38.237 "subsystem": "bdev", 00:14:38.237 "config": [ 00:14:38.237 { 00:14:38.237 "params": { 00:14:38.237 "io_mechanism": "io_uring", 00:14:38.237 "conserve_cpu": false, 00:14:38.237 "filename": "/dev/nvme0n1", 00:14:38.237 "name": "xnvme_bdev" 00:14:38.237 }, 00:14:38.237 "method": "bdev_xnvme_create" 00:14:38.237 }, 00:14:38.237 { 00:14:38.237 "method": "bdev_wait_for_examine" 00:14:38.237 } 00:14:38.237 ] 00:14:38.237 } 00:14:38.237 ] 00:14:38.237 } 00:14:38.499 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:38.499 fio-3.35 00:14:38.499 Starting 1 thread 00:14:45.087 00:14:45.087 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70310: Fri Dec 6 22:05:16 2024 00:14:45.087 write: IOPS=35.4k, BW=138MiB/s (145MB/s)(692MiB/5001msec); 0 zone resets 00:14:45.087 slat (usec): min=2, max=116, avg= 4.31, stdev= 2.19 00:14:45.087 clat (usec): min=336, max=6232, avg=1633.82, stdev=244.36 00:14:45.087 lat (usec): min=343, max=6235, avg=1638.13, stdev=244.80 00:14:45.087 clat percentiles (usec): 00:14:45.087 | 1.00th=[ 1205], 5.00th=[ 1319], 10.00th=[ 1369], 20.00th=[ 1434], 00:14:45.087 | 30.00th=[ 1500], 40.00th=[ 1549], 50.00th=[ 1598], 60.00th=[ 1663], 00:14:45.087 | 70.00th=[ 1729], 80.00th=[ 1811], 90.00th=[ 1942], 95.00th=[ 2073], 00:14:45.087 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 3032], 99.95th=[ 3195], 00:14:45.087 | 99.99th=[ 3785] 00:14:45.087 bw ( KiB/s): min=130560, max=149416, per=99.89%, avg=141448.89, stdev=6938.72, samples=9 00:14:45.087 iops : min=32640, max=37354, avg=35362.22, stdev=1734.72, samples=9 00:14:45.087 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.06% 00:14:45.087 lat (msec) : 2=92.42%, 4=7.48%, 10=0.01% 00:14:45.087 cpu : usr=31.48%, sys=67.06%, ctx=18, majf=0, minf=763 00:14:45.087 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:14:45.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.087 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:45.087 issued rwts: total=0,177037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:45.087 00:14:45.087 Run status group 0 (all jobs): 00:14:45.087 WRITE: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=692MiB (725MB), run=5001-5001msec 00:14:45.087 ----------------------------------------------------- 00:14:45.087 Suppressions used: 00:14:45.087 count bytes template 00:14:45.087 1 11 /usr/src/fio/parse.c 00:14:45.087 1 8 libtcmalloc_minimal.so 00:14:45.087 1 904 libcrypto.so 00:14:45.087 ----------------------------------------------------- 00:14:45.087 00:14:45.348 00:14:45.348 real 0m13.904s 00:14:45.348 user 0m6.091s 00:14:45.348 sys 0m7.341s 00:14:45.348 22:05:17 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.348 ************************************ 00:14:45.348 END TEST xnvme_fio_plugin 00:14:45.348 ************************************ 00:14:45.348 22:05:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:45.348 22:05:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:45.348 22:05:18 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:45.348 22:05:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:45.348 22:05:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:45.348 22:05:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:45.348 22:05:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.348 22:05:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.348 ************************************ 00:14:45.348 START TEST xnvme_rpc 00:14:45.348 ************************************ 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:45.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70391 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70391 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70391 ']' 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:45.349 22:05:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.349 [2024-12-06 22:05:18.138499] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
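This second xnvme_rpc pass repeats the io_uring case with conserve_cpu enabled; the only input difference is the -c flag, pulled from the cc map declared above (cc["false"]= and cc["true"]=-c). Side by side, the two create calls are (a sketch using rpc.py directly):

rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring      # conserve_cpu: false
rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # conserve_cpu: true

and the verification below expects framework_get_config to now report "conserve_cpu": true, which is what the [[ true == \t\r\u\e ]] check asserts.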
00:14:45.349 [2024-12-06 22:05:18.138896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70391 ] 00:14:45.610 [2024-12-06 22:05:18.300797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.610 [2024-12-06 22:05:18.434897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.561 xnvme_bdev 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70391 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70391 ']' 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70391 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70391 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.561 killing process with pid 70391 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70391' 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70391 00:14:46.561 22:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70391 00:14:48.471 00:14:48.471 real 0m3.180s 00:14:48.471 user 0m3.107s 00:14:48.471 sys 0m0.552s 00:14:48.471 22:05:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.471 ************************************ 00:14:48.471 END TEST xnvme_rpc 00:14:48.471 ************************************ 00:14:48.471 22:05:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.471 22:05:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:48.471 22:05:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:48.471 22:05:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.471 22:05:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.471 ************************************ 00:14:48.471 START TEST xnvme_bdevperf 00:14:48.471 ************************************ 00:14:48.471 22:05:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:48.471 22:05:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:48.471 22:05:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:48.471 22:05:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.471 22:05:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:48.471 22:05:21 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:48.471 22:05:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.471 22:05:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.732 { 00:14:48.732 "subsystems": [ 00:14:48.732 { 00:14:48.732 "subsystem": "bdev", 00:14:48.732 "config": [ 00:14:48.732 { 00:14:48.732 "params": { 00:14:48.732 "io_mechanism": "io_uring", 00:14:48.732 "conserve_cpu": true, 00:14:48.732 "filename": "/dev/nvme0n1", 00:14:48.732 "name": "xnvme_bdev" 00:14:48.732 }, 00:14:48.732 "method": "bdev_xnvme_create" 00:14:48.732 }, 00:14:48.732 { 00:14:48.732 "method": "bdev_wait_for_examine" 00:14:48.732 } 00:14:48.732 ] 00:14:48.732 } 00:14:48.732 ] 00:14:48.732 } 00:14:48.732 [2024-12-06 22:05:21.387855] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:14:48.732 [2024-12-06 22:05:21.388005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70465 ] 00:14:48.732 [2024-12-06 22:05:21.553155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.992 [2024-12-06 22:05:21.709076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.252 Running I/O for 5 seconds... 00:14:51.580 35648.00 IOPS, 139.25 MiB/s [2024-12-06T22:05:25.395Z] 35456.00 IOPS, 138.50 MiB/s [2024-12-06T22:05:26.333Z] 34751.00 IOPS, 135.75 MiB/s [2024-12-06T22:05:27.271Z] 34681.00 IOPS, 135.47 MiB/s [2024-12-06T22:05:27.271Z] 34616.20 IOPS, 135.22 MiB/s 00:14:54.399 Latency(us) 00:14:54.399 [2024-12-06T22:05:27.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.399 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:54.399 xnvme_bdev : 5.00 34603.70 135.17 0.00 0.00 1845.00 176.44 13208.02 00:14:54.399 [2024-12-06T22:05:27.271Z] =================================================================================================================== 00:14:54.399 [2024-12-06T22:05:27.271Z] Total : 34603.70 135.17 0.00 0.00 1845.00 176.44 13208.02 00:14:55.337 22:05:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.338 22:05:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:55.338 22:05:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:55.338 22:05:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:55.338 22:05:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:55.338 { 00:14:55.338 "subsystems": [ 00:14:55.338 { 00:14:55.338 "subsystem": "bdev", 00:14:55.338 "config": [ 00:14:55.338 { 00:14:55.338 "params": { 00:14:55.338 "io_mechanism": "io_uring", 00:14:55.338 "conserve_cpu": true, 00:14:55.338 "filename": "/dev/nvme0n1", 00:14:55.338 "name": "xnvme_bdev" 00:14:55.338 }, 00:14:55.338 "method": "bdev_xnvme_create" 00:14:55.338 }, 00:14:55.338 { 00:14:55.338 "method": "bdev_wait_for_examine" 00:14:55.338 } 00:14:55.338 ] 00:14:55.338 } 00:14:55.338 ] 00:14:55.338 } 00:14:55.338 [2024-12-06 22:05:28.022224] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
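Before feeding a generated config to bdevperf it can be worth pulling the conserve_cpu knob back out, since it is the only field that changes between these io_uring passes. A sketch, assuming the JSON above were saved to a hypothetical /tmp/xnvme.json:

jq -r '.subsystems[]
       | select(.subsystem == "bdev").config[]
       | select(.method == "bdev_xnvme_create").params.conserve_cpu' /tmp/xnvme.json
# -> true for this pass; the earlier io_uring passes report false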
00:14:55.338 [2024-12-06 22:05:28.022391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70546 ] 00:14:55.338 [2024-12-06 22:05:28.188667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.597 [2024-12-06 22:05:28.338661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.857 Running I/O for 5 seconds... 00:14:58.178 39538.00 IOPS, 154.45 MiB/s [2024-12-06T22:05:31.992Z] 39036.00 IOPS, 152.48 MiB/s [2024-12-06T22:05:32.933Z] 38647.00 IOPS, 150.96 MiB/s [2024-12-06T22:05:33.878Z] 38527.00 IOPS, 150.50 MiB/s 00:15:01.006 Latency(us) 00:15:01.006 [2024-12-06T22:05:33.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.006 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:01.006 xnvme_bdev : 5.00 38411.27 150.04 0.00 0.00 1661.20 460.01 9225.45 00:15:01.006 [2024-12-06T22:05:33.878Z] =================================================================================================================== 00:15:01.006 [2024-12-06T22:05:33.878Z] Total : 38411.27 150.04 0.00 0.00 1661.20 460.01 9225.45 00:15:01.681 00:15:01.681 real 0m13.184s 00:15:01.681 user 0m6.591s 00:15:01.681 sys 0m5.688s 00:15:01.681 22:05:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.681 22:05:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.681 ************************************ 00:15:01.681 END TEST xnvme_bdevperf 00:15:01.681 ************************************ 00:15:01.681 22:05:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:01.681 22:05:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.681 22:05:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.681 22:05:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.943 ************************************ 00:15:01.943 START TEST xnvme_fio_plugin 00:15:01.943 ************************************ 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.943 22:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.943 { 00:15:01.943 "subsystems": [ 00:15:01.943 { 00:15:01.943 "subsystem": "bdev", 00:15:01.943 "config": [ 00:15:01.943 { 00:15:01.943 "params": { 00:15:01.943 "io_mechanism": "io_uring", 00:15:01.943 "conserve_cpu": true, 00:15:01.943 "filename": "/dev/nvme0n1", 00:15:01.943 "name": "xnvme_bdev" 00:15:01.943 }, 00:15:01.943 "method": "bdev_xnvme_create" 00:15:01.943 }, 00:15:01.943 { 00:15:01.943 "method": "bdev_wait_for_examine" 00:15:01.943 } 00:15:01.943 ] 00:15:01.943 } 00:15:01.943 ] 00:15:01.943 } 00:15:01.943 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:01.943 fio-3.35 00:15:01.943 Starting 1 thread 00:15:08.532 00:15:08.532 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70665: Fri Dec 6 22:05:40 2024 00:15:08.532 read: IOPS=35.3k, BW=138MiB/s (145MB/s)(690MiB/5001msec) 00:15:08.532 slat (nsec): min=2874, max=66532, avg=4053.83, stdev=2138.02 00:15:08.532 clat (usec): min=1060, max=3227, avg=1647.16, stdev=220.28 00:15:08.532 lat (usec): min=1063, max=3256, avg=1651.22, stdev=220.79 00:15:08.532 clat percentiles (usec): 00:15:08.532 | 1.00th=[ 1287], 5.00th=[ 1369], 10.00th=[ 1418], 20.00th=[ 1467], 00:15:08.532 | 30.00th=[ 1516], 40.00th=[ 1565], 50.00th=[ 1614], 60.00th=[ 1663], 00:15:08.532 | 70.00th=[ 1713], 80.00th=[ 1795], 90.00th=[ 1942], 95.00th=[ 2073], 00:15:08.532 | 99.00th=[ 2343], 99.50th=[ 2474], 99.90th=[ 2769], 99.95th=[ 2868], 00:15:08.532 | 99.99th=[ 3064] 00:15:08.532 bw ( KiB/s): min=138752, max=145920, per=100.00%, 
avg=141880.89, stdev=2487.87, samples=9 00:15:08.532 iops : min=34688, max=36480, avg=35470.22, stdev=621.97, samples=9 00:15:08.532 lat (msec) : 2=92.63%, 4=7.37% 00:15:08.532 cpu : usr=36.80%, sys=58.52%, ctx=9, majf=0, minf=762 00:15:08.532 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:08.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.532 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:08.532 issued rwts: total=176512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.532 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:08.532 00:15:08.532 Run status group 0 (all jobs): 00:15:08.532 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=690MiB (723MB), run=5001-5001msec 00:15:08.793 ----------------------------------------------------- 00:15:08.793 Suppressions used: 00:15:08.793 count bytes template 00:15:08.793 1 11 /usr/src/fio/parse.c 00:15:08.793 1 8 libtcmalloc_minimal.so 00:15:08.793 1 904 libcrypto.so 00:15:08.793 ----------------------------------------------------- 00:15:08.793 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
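The shell trace here (as for the earlier randread job) is the harness locating the ASan runtime that the fio plugin links against and forcing it to load first. Condensed, the pattern is roughly this sketch, with paths and libasan.so.8 as observed in this particular run:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # find the ASan runtime the external ioengine was linked against
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # preload ASan ahead of the plugin so its interceptors resolve first,
  # then hand fio the spdk_bdev engine with the options from the trace
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k \
      --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 \
      --thread=1 --name xnvme_bdev

The explicit preload matters because ASan aborts at startup unless its runtime is first in the process's library list.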
00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:08.793 22:05:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.793 { 00:15:08.793 "subsystems": [ 00:15:08.793 { 00:15:08.793 "subsystem": "bdev", 00:15:08.793 "config": [ 00:15:08.793 { 00:15:08.793 "params": { 00:15:08.793 "io_mechanism": "io_uring", 00:15:08.793 "conserve_cpu": true, 00:15:08.793 "filename": "/dev/nvme0n1", 00:15:08.793 "name": "xnvme_bdev" 00:15:08.793 }, 00:15:08.793 "method": "bdev_xnvme_create" 00:15:08.793 }, 00:15:08.793 { 00:15:08.793 "method": "bdev_wait_for_examine" 00:15:08.793 } 00:15:08.793 ] 00:15:08.793 } 00:15:08.793 ] 00:15:08.793 } 00:15:09.054 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:09.054 fio-3.35 00:15:09.054 Starting 1 thread 00:15:15.642 00:15:15.642 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70762: Fri Dec 6 22:05:47 2024 00:15:15.642 write: IOPS=35.9k, BW=140MiB/s (147MB/s)(701MiB/5001msec); 0 zone resets 00:15:15.642 slat (nsec): min=2910, max=69917, avg=4310.36, stdev=2181.77 00:15:15.642 clat (usec): min=1038, max=4828, avg=1608.52, stdev=234.20 00:15:15.642 lat (usec): min=1042, max=4832, avg=1612.83, stdev=234.64 00:15:15.642 clat percentiles (usec): 00:15:15.642 | 1.00th=[ 1221], 5.00th=[ 1303], 10.00th=[ 1352], 20.00th=[ 1418], 00:15:15.642 | 30.00th=[ 1467], 40.00th=[ 1516], 50.00th=[ 1565], 60.00th=[ 1631], 00:15:15.642 | 70.00th=[ 1696], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2057], 00:15:15.642 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2769], 99.95th=[ 2999], 00:15:15.642 | 99.99th=[ 3654] 00:15:15.642 bw ( KiB/s): min=133048, max=150528, per=100.00%, avg=144137.33, stdev=4934.29, samples=9 00:15:15.642 iops : min=33262, max=37632, avg=36034.33, stdev=1233.57, samples=9 00:15:15.642 lat (msec) : 2=93.48%, 4=6.51%, 10=0.01% 00:15:15.642 cpu : usr=39.80%, sys=55.62%, ctx=10, majf=0, minf=763 00:15:15.643 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:15:15.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.643 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:15.643 issued rwts: total=0,179515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.643 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:15.643 00:15:15.643 Run status group 0 (all jobs): 00:15:15.643 WRITE: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=701MiB (735MB), run=5001-5001msec 00:15:15.905 ----------------------------------------------------- 00:15:15.905 Suppressions used: 00:15:15.905 count bytes template 00:15:15.905 1 11 /usr/src/fio/parse.c 00:15:15.905 1 8 libtcmalloc_minimal.so 00:15:15.905 1 904 libcrypto.so 00:15:15.905 ----------------------------------------------------- 00:15:15.905 00:15:15.905 00:15:15.905 real 0m13.998s 00:15:15.905 user 0m6.834s 00:15:15.905 sys 0m6.370s 00:15:15.905 22:05:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.905 ************************************ 00:15:15.905 END TEST 
xnvme_fio_plugin 00:15:15.905 ************************************ 00:15:15.905 22:05:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:15.905 22:05:48 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:15.905 22:05:48 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:15.905 22:05:48 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:15.905 22:05:48 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:15.905 22:05:48 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:15.905 22:05:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:15.905 22:05:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:15.905 22:05:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:15.905 22:05:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:15.905 22:05:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:15.905 22:05:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.905 22:05:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.905 ************************************ 00:15:15.905 START TEST xnvme_rpc 00:15:15.905 ************************************ 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70848 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70848 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70848 ']' 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.905 22:05:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.905 [2024-12-06 22:05:48.735777] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
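rpc_cmd in this trace is effectively a thin wrapper around SPDK's scripts/rpc.py, so the create/inspect/delete cycle exercised below maps to direct invocations like these (a sketch; assumes a running spdk_tgt listening on the default /var/tmp/spdk.sock):

  # create an xNVMe bdev over the io_uring_cmd mechanism (no conserve_cpu in this first pass)
  scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
  # read back what was registered and pick out one parameter, as the test does with jq
  scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
  # tear the bdev down again
  scripts/rpc.py bdev_xnvme_delete xnvme_bdev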
00:15:15.905 [2024-12-06 22:05:48.735947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70848 ] 00:15:16.165 [2024-12-06 22:05:48.905981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.165 [2024-12-06 22:05:49.033103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.107 xnvme_bdev 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70848 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70848 ']' 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70848 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70848 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.107 killing process with pid 70848 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70848' 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70848 00:15:17.107 22:05:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70848 00:15:19.049 00:15:19.049 real 0m2.974s 00:15:19.049 user 0m2.977s 00:15:19.049 sys 0m0.496s 00:15:19.049 ************************************ 00:15:19.049 END TEST xnvme_rpc 00:15:19.049 ************************************ 00:15:19.049 22:05:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.049 22:05:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.049 22:05:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:19.049 22:05:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:19.049 22:05:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.049 22:05:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.049 ************************************ 00:15:19.049 START TEST xnvme_bdevperf 00:15:19.049 ************************************ 00:15:19.049 22:05:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:19.049 22:05:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:19.049 22:05:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:19.049 22:05:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:19.049 22:05:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:19.049 22:05:51 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:19.049 22:05:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:19.049 22:05:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:19.049 { 00:15:19.049 "subsystems": [ 00:15:19.049 { 00:15:19.049 "subsystem": "bdev", 00:15:19.049 "config": [ 00:15:19.049 { 00:15:19.049 "params": { 00:15:19.049 "io_mechanism": "io_uring_cmd", 00:15:19.049 "conserve_cpu": false, 00:15:19.049 "filename": "/dev/ng0n1", 00:15:19.049 "name": "xnvme_bdev" 00:15:19.049 }, 00:15:19.049 "method": "bdev_xnvme_create" 00:15:19.049 }, 00:15:19.049 { 00:15:19.049 "method": "bdev_wait_for_examine" 00:15:19.049 } 00:15:19.049 ] 00:15:19.049 } 00:15:19.049 ] 00:15:19.049 } 00:15:19.049 [2024-12-06 22:05:51.752120] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:15:19.049 [2024-12-06 22:05:51.752315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70922 ] 00:15:19.310 [2024-12-06 22:05:51.920776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.310 [2024-12-06 22:05:52.042998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.571 Running I/O for 5 seconds... 00:15:21.903 38080.00 IOPS, 148.75 MiB/s [2024-12-06T22:05:55.347Z] 37024.00 IOPS, 144.62 MiB/s [2024-12-06T22:05:56.732Z] 36885.33 IOPS, 144.08 MiB/s [2024-12-06T22:05:57.679Z] 37232.00 IOPS, 145.44 MiB/s [2024-12-06T22:05:57.679Z] 37248.00 IOPS, 145.50 MiB/s 00:15:24.807 Latency(us) 00:15:24.807 [2024-12-06T22:05:57.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.807 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:24.807 xnvme_bdev : 5.01 37215.28 145.37 0.00 0.00 1715.32 1033.45 4411.08 00:15:24.807 [2024-12-06T22:05:57.679Z] =================================================================================================================== 00:15:24.807 [2024-12-06T22:05:57.679Z] Total : 37215.28 145.37 0.00 0.00 1715.32 1033.45 4411.08 00:15:25.381 22:05:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:25.381 22:05:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:25.381 22:05:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:25.381 22:05:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:25.381 22:05:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:25.381 { 00:15:25.381 "subsystems": [ 00:15:25.381 { 00:15:25.381 "subsystem": "bdev", 00:15:25.381 "config": [ 00:15:25.381 { 00:15:25.381 "params": { 00:15:25.381 "io_mechanism": "io_uring_cmd", 00:15:25.381 "conserve_cpu": false, 00:15:25.381 "filename": "/dev/ng0n1", 00:15:25.381 "name": "xnvme_bdev" 00:15:25.381 }, 00:15:25.381 "method": "bdev_xnvme_create" 00:15:25.381 }, 00:15:25.381 { 00:15:25.381 "method": "bdev_wait_for_examine" 00:15:25.381 } 00:15:25.381 ] 00:15:25.381 } 00:15:25.381 ] 00:15:25.381 } 00:15:25.381 [2024-12-06 22:05:58.231269] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:15:25.381 [2024-12-06 22:05:58.231433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70996 ] 00:15:25.642 [2024-12-06 22:05:58.398055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.905 [2024-12-06 22:05:58.521395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.168 Running I/O for 5 seconds... 00:15:28.055 11679.00 IOPS, 45.62 MiB/s [2024-12-06T22:06:01.870Z] 11855.00 IOPS, 46.31 MiB/s [2024-12-06T22:06:03.260Z] 11992.67 IOPS, 46.85 MiB/s [2024-12-06T22:06:03.832Z] 12151.00 IOPS, 47.46 MiB/s [2024-12-06T22:06:03.833Z] 12191.80 IOPS, 47.62 MiB/s 00:15:30.961 Latency(us) 00:15:30.961 [2024-12-06T22:06:03.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.961 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:30.961 xnvme_bdev : 5.01 12181.45 47.58 0.00 0.00 5243.94 64.59 26214.40 00:15:30.961 [2024-12-06T22:06:03.833Z] =================================================================================================================== 00:15:30.961 [2024-12-06T22:06:03.833Z] Total : 12181.45 47.58 0.00 0.00 5243.94 64.59 26214.40 00:15:31.904 22:06:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:31.904 22:06:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:31.904 22:06:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:31.904 22:06:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:31.904 22:06:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:31.904 { 00:15:31.904 "subsystems": [ 00:15:31.904 { 00:15:31.904 "subsystem": "bdev", 00:15:31.904 "config": [ 00:15:31.904 { 00:15:31.904 "params": { 00:15:31.904 "io_mechanism": "io_uring_cmd", 00:15:31.904 "conserve_cpu": false, 00:15:31.904 "filename": "/dev/ng0n1", 00:15:31.904 "name": "xnvme_bdev" 00:15:31.904 }, 00:15:31.904 "method": "bdev_xnvme_create" 00:15:31.904 }, 00:15:31.904 { 00:15:31.904 "method": "bdev_wait_for_examine" 00:15:31.904 } 00:15:31.904 ] 00:15:31.904 } 00:15:31.904 ] 00:15:31.904 } 00:15:32.163 [2024-12-06 22:06:04.781883] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:15:32.163 [2024-12-06 22:06:04.782027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:15:32.163 [2024-12-06 22:06:04.946970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.429 [2024-12-06 22:06:05.091271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.688 Running I/O for 5 seconds... 
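Each bdevperf invocation in this test receives its bdev table as JSON on an inherited file descriptor (--json /dev/fd/62). To replay one of these runs by hand, the same subsystems block can live in an ordinary file; a sketch mirroring the randwrite case above, where /tmp/xnvme.json is an illustrative path:

  contents of /tmp/xnvme.json (same JSON the harness streams over fd 62):

  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_xnvme_create",
            "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                        "filename": "/dev/ng0n1", "name": "xnvme_bdev" } },
          { "method": "bdev_wait_for_examine" }
        ] } ]
  }

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme.json \
      -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096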
00:15:34.665 72000.00 IOPS, 281.25 MiB/s [2024-12-06T22:06:08.481Z] 77152.00 IOPS, 301.38 MiB/s [2024-12-06T22:06:09.420Z] 79850.67 IOPS, 311.92 MiB/s [2024-12-06T22:06:10.790Z] 79744.00 IOPS, 311.50 MiB/s [2024-12-06T22:06:10.790Z] 82265.60 IOPS, 321.35 MiB/s 00:15:37.918 Latency(us) 00:15:37.918 [2024-12-06T22:06:10.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.918 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:37.918 xnvme_bdev : 5.00 82224.74 321.19 0.00 0.00 774.94 475.77 2634.04 00:15:37.918 [2024-12-06T22:06:10.790Z] =================================================================================================================== 00:15:37.918 [2024-12-06T22:06:10.790Z] Total : 82224.74 321.19 0.00 0.00 774.94 475.77 2634.04 00:15:38.177 22:06:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:38.177 22:06:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:38.177 22:06:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:38.177 22:06:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:38.177 22:06:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:38.177 { 00:15:38.177 "subsystems": [ 00:15:38.177 { 00:15:38.177 "subsystem": "bdev", 00:15:38.177 "config": [ 00:15:38.177 { 00:15:38.177 "params": { 00:15:38.177 "io_mechanism": "io_uring_cmd", 00:15:38.177 "conserve_cpu": false, 00:15:38.177 "filename": "/dev/ng0n1", 00:15:38.177 "name": "xnvme_bdev" 00:15:38.177 }, 00:15:38.177 "method": "bdev_xnvme_create" 00:15:38.177 }, 00:15:38.177 { 00:15:38.177 "method": "bdev_wait_for_examine" 00:15:38.177 } 00:15:38.177 ] 00:15:38.177 } 00:15:38.177 ] 00:15:38.177 } 00:15:38.177 [2024-12-06 22:06:11.034087] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:15:38.177 [2024-12-06 22:06:11.034220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71147 ] 00:15:38.435 [2024-12-06 22:06:11.189855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.435 [2024-12-06 22:06:11.279306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.693 Running I/O for 5 seconds... 
00:15:40.640 608.00 IOPS, 2.38 MiB/s [2024-12-06T22:06:14.892Z] 533.00 IOPS, 2.08 MiB/s [2024-12-06T22:06:15.832Z] 832.33 IOPS, 3.25 MiB/s [2024-12-06T22:06:16.773Z] 690.50 IOPS, 2.70 MiB/s [2024-12-06T22:06:17.034Z] 642.00 IOPS, 2.51 MiB/s 00:15:44.162 Latency(us) 00:15:44.162 [2024-12-06T22:06:17.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.162 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:44.162 xnvme_bdev : 5.31 616.63 2.41 0.00 0.00 101122.89 99.25 609787.27 00:15:44.162 [2024-12-06T22:06:17.034Z] =================================================================================================================== 00:15:44.162 [2024-12-06T22:06:17.034Z] Total : 616.63 2.41 0.00 0.00 101122.89 99.25 609787.27 00:15:44.733 00:15:44.733 real 0m25.697s 00:15:44.733 user 0m14.343s 00:15:44.733 sys 0m10.884s 00:15:44.733 22:06:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.733 ************************************ 00:15:44.733 22:06:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:44.733 END TEST xnvme_bdevperf 00:15:44.733 ************************************ 00:15:44.733 22:06:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:44.733 22:06:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:44.733 22:06:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.733 22:06:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:44.733 ************************************ 00:15:44.733 START TEST xnvme_fio_plugin 00:15:44.733 ************************************ 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:44.733 22:06:17 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:44.733 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:44.734 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:44.734 22:06:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.734 { 00:15:44.734 "subsystems": [ 00:15:44.734 { 00:15:44.734 "subsystem": "bdev", 00:15:44.734 "config": [ 00:15:44.734 { 00:15:44.734 "params": { 00:15:44.734 "io_mechanism": "io_uring_cmd", 00:15:44.734 "conserve_cpu": false, 00:15:44.734 "filename": "/dev/ng0n1", 00:15:44.734 "name": "xnvme_bdev" 00:15:44.734 }, 00:15:44.734 "method": "bdev_xnvme_create" 00:15:44.734 }, 00:15:44.734 { 00:15:44.734 "method": "bdev_wait_for_examine" 00:15:44.734 } 00:15:44.734 ] 00:15:44.734 } 00:15:44.734 ] 00:15:44.734 } 00:15:44.994 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:44.994 fio-3.35 00:15:44.994 Starting 1 thread 00:15:50.288 00:15:50.288 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71268: Fri Dec 6 22:06:23 2024 00:15:50.288 read: IOPS=49.5k, BW=193MiB/s (203MB/s)(966MiB/5001msec) 00:15:50.288 slat (nsec): min=2886, max=80057, avg=3739.68, stdev=1456.32 00:15:50.288 clat (usec): min=179, max=6070, avg=1148.33, stdev=266.38 00:15:50.288 lat (usec): min=184, max=6073, avg=1152.07, stdev=266.61 00:15:50.288 clat percentiles (usec): 00:15:50.288 | 1.00th=[ 717], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 922], 00:15:50.288 | 30.00th=[ 979], 40.00th=[ 1045], 50.00th=[ 1106], 60.00th=[ 1172], 00:15:50.288 | 70.00th=[ 1254], 80.00th=[ 1352], 90.00th=[ 1500], 95.00th=[ 1614], 00:15:50.288 | 99.00th=[ 1926], 99.50th=[ 2073], 99.90th=[ 2507], 99.95th=[ 2704], 00:15:50.288 | 99.99th=[ 3949] 00:15:50.288 bw ( KiB/s): min=176128, max=225280, per=100.00%, avg=197905.78, stdev=19175.31, samples=9 00:15:50.288 iops : min=44032, max=56320, avg=49476.44, stdev=4793.83, samples=9 00:15:50.288 lat (usec) : 250=0.01%, 500=0.01%, 750=2.11%, 1000=30.78% 00:15:50.288 lat (msec) : 2=66.40%, 4=0.69%, 10=0.01% 00:15:50.288 cpu : usr=36.54%, sys=62.50%, ctx=16, majf=0, minf=762 00:15:50.288 IO depths : 1=1.5%, 2=3.0%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:15:50.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.288 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 
32=0.1%, 64=1.5%, >=64=0.0% 00:15:50.288 issued rwts: total=247325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.288 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:50.288 00:15:50.288 Run status group 0 (all jobs): 00:15:50.288 READ: bw=193MiB/s (203MB/s), 193MiB/s-193MiB/s (203MB/s-203MB/s), io=966MiB (1013MB), run=5001-5001msec 00:15:51.688 ----------------------------------------------------- 00:15:51.688 Suppressions used: 00:15:51.688 count bytes template 00:15:51.688 1 11 /usr/src/fio/parse.c 00:15:51.688 1 8 libtcmalloc_minimal.so 00:15:51.688 1 904 libcrypto.so 00:15:51.688 ----------------------------------------------------- 00:15:51.688 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:51.688 22:06:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.688 { 00:15:51.688 "subsystems": [ 00:15:51.688 { 00:15:51.688 "subsystem": "bdev", 00:15:51.688 "config": [ 00:15:51.688 { 00:15:51.688 "params": { 00:15:51.688 "io_mechanism": "io_uring_cmd", 00:15:51.688 "conserve_cpu": false, 00:15:51.688 "filename": "/dev/ng0n1", 00:15:51.688 "name": "xnvme_bdev" 00:15:51.688 }, 00:15:51.688 "method": "bdev_xnvme_create" 00:15:51.688 }, 00:15:51.688 { 00:15:51.688 "method": "bdev_wait_for_examine" 00:15:51.688 } 00:15:51.688 ] 00:15:51.688 } 00:15:51.688 ] 00:15:51.688 } 00:15:51.688 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:51.688 fio-3.35 00:15:51.688 Starting 1 thread 00:15:58.272 00:15:58.272 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71354: Fri Dec 6 22:06:30 2024 00:15:58.272 write: IOPS=34.4k, BW=134MiB/s (141MB/s)(672MiB/5001msec); 0 zone resets 00:15:58.272 slat (usec): min=2, max=121, avg= 4.05, stdev= 1.96 00:15:58.272 clat (usec): min=69, max=18824, avg=1728.64, stdev=1412.94 00:15:58.272 lat (usec): min=73, max=18828, avg=1732.69, stdev=1412.99 00:15:58.272 clat percentiles (usec): 00:15:58.272 | 1.00th=[ 429], 5.00th=[ 750], 10.00th=[ 938], 20.00th=[ 1123], 00:15:58.272 | 30.00th=[ 1254], 40.00th=[ 1336], 50.00th=[ 1434], 60.00th=[ 1516], 00:15:58.272 | 70.00th=[ 1631], 80.00th=[ 1762], 90.00th=[ 2180], 95.00th=[ 4359], 00:15:58.272 | 99.00th=[ 8848], 99.50th=[ 9765], 99.90th=[11600], 99.95th=[13173], 00:15:58.272 | 99.99th=[17171] 00:15:58.272 bw ( KiB/s): min=128048, max=150152, per=99.98%, avg=137487.22, stdev=6697.51, samples=9 00:15:58.272 iops : min=32012, max=37538, avg=34371.78, stdev=1674.35, samples=9 00:15:58.272 lat (usec) : 100=0.01%, 250=0.23%, 500=1.25%, 750=3.51%, 1000=7.65% 00:15:58.272 lat (msec) : 2=75.04%, 4=6.93%, 10=4.94%, 20=0.43% 00:15:58.272 cpu : usr=36.92%, sys=61.88%, ctx=17, majf=0, minf=763 00:15:58.272 IO depths : 1=0.9%, 2=1.9%, 4=4.0%, 8=8.8%, 16=21.1%, 32=60.3%, >=64=2.9% 00:15:58.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.272 complete : 0=0.0%, 4=97.8%, 8=0.2%, 16=0.2%, 32=0.4%, 64=1.4%, >=64=0.0% 00:15:58.272 issued rwts: total=0,171932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:58.272 00:15:58.272 Run status group 0 (all jobs): 00:15:58.273 WRITE: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=672MiB (704MB), run=5001-5001msec 00:15:58.273 ----------------------------------------------------- 00:15:58.273 Suppressions used: 00:15:58.273 count bytes template 00:15:58.273 1 11 /usr/src/fio/parse.c 00:15:58.273 1 8 libtcmalloc_minimal.so 00:15:58.273 1 904 libcrypto.so 00:15:58.273 ----------------------------------------------------- 00:15:58.273 00:15:58.273 00:15:58.273 real 0m13.699s 00:15:58.273 user 0m6.504s 00:15:58.273 sys 0m6.755s 00:15:58.273 22:06:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.273 ************************************ 00:15:58.273 END TEST xnvme_fio_plugin 00:15:58.273 ************************************ 00:15:58.273 22:06:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 22:06:31 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:58.534 22:06:31 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:58.534 22:06:31 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:58.534 22:06:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:58.534 22:06:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:58.534 22:06:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.534 22:06:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 ************************************ 00:15:58.534 START TEST xnvme_rpc 00:15:58.534 ************************************ 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71434 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71434 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71434 ']' 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.534 22:06:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:58.534 [2024-12-06 22:06:31.296151] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
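This second xnvme_rpc pass repeats the cycle with -c, i.e. conserve_cpu enabled. On the wire, the create call that rpc_cmd issues below is a plain JSON-RPC 2.0 request against /var/tmp/spdk.sock; roughly this shape (a sketch, with the id chosen arbitrarily):

  { "jsonrpc": "2.0", "id": 1, "method": "bdev_xnvme_create",
    "params": { "filename": "/dev/ng0n1", "name": "xnvme_bdev",
                "io_mechanism": "io_uring_cmd", "conserve_cpu": true } }

The test then checks via framework_get_config that conserve_cpu reads back as true before deleting the bdev and killing the target.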
00:15:58.534 [2024-12-06 22:06:31.296340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71434 ] 00:15:58.796 [2024-12-06 22:06:31.466533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.796 [2024-12-06 22:06:31.621560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.775 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.775 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:59.775 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 xnvme_bdev 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71434 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71434 ']' 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71434 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71434 00:15:59.776 killing process with pid 71434 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71434' 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71434 00:15:59.776 22:06:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71434 00:16:01.693 00:16:01.693 real 0m3.237s 00:16:01.693 user 0m3.123s 00:16:01.693 sys 0m0.595s 00:16:01.693 22:06:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.693 ************************************ 00:16:01.693 END TEST xnvme_rpc 00:16:01.693 ************************************ 00:16:01.693 22:06:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.693 22:06:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:01.693 22:06:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:01.693 22:06:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.693 22:06:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.693 ************************************ 00:16:01.693 START TEST xnvme_bdevperf 00:16:01.693 ************************************ 00:16:01.693 22:06:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:01.693 22:06:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:01.693 22:06:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:01.693 22:06:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:01.693 22:06:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:01.693 22:06:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:01.693 22:06:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:01.693 22:06:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:01.693 { 00:16:01.693 "subsystems": [ 00:16:01.693 { 00:16:01.693 "subsystem": "bdev", 00:16:01.693 "config": [ 00:16:01.693 { 00:16:01.693 "params": { 00:16:01.693 "io_mechanism": "io_uring_cmd", 00:16:01.693 "conserve_cpu": true, 00:16:01.693 "filename": "/dev/ng0n1", 00:16:01.693 "name": "xnvme_bdev" 00:16:01.693 }, 00:16:01.693 "method": "bdev_xnvme_create" 00:16:01.693 }, 00:16:01.693 { 00:16:01.693 "method": "bdev_wait_for_examine" 00:16:01.693 } 00:16:01.693 ] 00:16:01.693 } 00:16:01.693 ] 00:16:01.693 } 00:16:01.955 [2024-12-06 22:06:34.584954] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:16:01.955 [2024-12-06 22:06:34.585345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71514 ] 00:16:01.955 [2024-12-06 22:06:34.751441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.216 [2024-12-06 22:06:34.910753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.477 Running I/O for 5 seconds... 00:16:04.816 37376.00 IOPS, 146.00 MiB/s [2024-12-06T22:06:38.259Z] 36992.00 IOPS, 144.50 MiB/s [2024-12-06T22:06:39.645Z] 36753.00 IOPS, 143.57 MiB/s [2024-12-06T22:06:40.590Z] 37996.75 IOPS, 148.42 MiB/s 00:16:07.718 Latency(us) 00:16:07.718 [2024-12-06T22:06:40.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.718 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:07.718 xnvme_bdev : 5.01 38511.97 150.44 0.00 0.00 1657.52 888.52 6755.25 00:16:07.718 [2024-12-06T22:06:40.590Z] =================================================================================================================== 00:16:07.718 [2024-12-06T22:06:40.590Z] Total : 38511.97 150.44 0.00 0.00 1657.52 888.52 6755.25 00:16:08.291 22:06:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:08.291 22:06:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:08.291 22:06:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:08.291 22:06:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:08.291 22:06:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:08.291 { 00:16:08.291 "subsystems": [ 00:16:08.291 { 00:16:08.291 "subsystem": "bdev", 00:16:08.291 "config": [ 00:16:08.291 { 00:16:08.291 "params": { 00:16:08.291 "io_mechanism": "io_uring_cmd", 00:16:08.291 "conserve_cpu": true, 00:16:08.291 "filename": "/dev/ng0n1", 00:16:08.291 "name": "xnvme_bdev" 00:16:08.291 }, 00:16:08.291 "method": "bdev_xnvme_create" 00:16:08.291 }, 00:16:08.291 { 00:16:08.291 "method": "bdev_wait_for_examine" 00:16:08.291 } 00:16:08.291 ] 00:16:08.291 } 00:16:08.291 ] 00:16:08.291 } 00:16:08.291 [2024-12-06 22:06:41.133958] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:16:08.291 [2024-12-06 22:06:41.134359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71588 ] 00:16:08.553 [2024-12-06 22:06:41.298612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.553 [2024-12-06 22:06:41.422052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.141 Running I/O for 5 seconds... 00:16:11.028 39020.00 IOPS, 152.42 MiB/s [2024-12-06T22:06:44.877Z] 39436.00 IOPS, 154.05 MiB/s [2024-12-06T22:06:45.816Z] 41004.67 IOPS, 160.17 MiB/s [2024-12-06T22:06:46.756Z] 40312.75 IOPS, 157.47 MiB/s 00:16:13.884 Latency(us) 00:16:13.884 [2024-12-06T22:06:46.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.884 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:13.884 xnvme_bdev : 5.00 39746.39 155.26 0.00 0.00 1605.72 690.02 7662.67 00:16:13.884 [2024-12-06T22:06:46.756Z] =================================================================================================================== 00:16:13.884 [2024-12-06T22:06:46.756Z] Total : 39746.39 155.26 0.00 0.00 1605.72 690.02 7662.67 00:16:14.825 22:06:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:14.825 22:06:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:14.825 22:06:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:14.825 22:06:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:14.825 22:06:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:14.825 { 00:16:14.825 "subsystems": [ 00:16:14.825 { 00:16:14.825 "subsystem": "bdev", 00:16:14.825 "config": [ 00:16:14.825 { 00:16:14.825 "params": { 00:16:14.825 "io_mechanism": "io_uring_cmd", 00:16:14.825 "conserve_cpu": true, 00:16:14.825 "filename": "/dev/ng0n1", 00:16:14.825 "name": "xnvme_bdev" 00:16:14.825 }, 00:16:14.825 "method": "bdev_xnvme_create" 00:16:14.825 }, 00:16:14.825 { 00:16:14.825 "method": "bdev_wait_for_examine" 00:16:14.825 } 00:16:14.825 ] 00:16:14.825 } 00:16:14.825 ] 00:16:14.825 } 00:16:14.825 [2024-12-06 22:06:47.602033] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:16:14.825 [2024-12-06 22:06:47.602593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71662 ] 00:16:15.086 [2024-12-06 22:06:47.771218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.086 [2024-12-06 22:06:47.904816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.346 Running I/O for 5 seconds... 
00:16:17.656 81792.00 IOPS, 319.50 MiB/s [2024-12-06T22:06:51.464Z] 82848.00 IOPS, 323.62 MiB/s [2024-12-06T22:06:52.398Z] 84693.33 IOPS, 330.83 MiB/s [2024-12-06T22:06:53.336Z] 85536.00 IOPS, 334.12 MiB/s 00:16:20.464 Latency(us) 00:16:20.464 [2024-12-06T22:06:53.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.465 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:20.465 xnvme_bdev : 5.00 85094.23 332.40 0.00 0.00 748.66 437.96 2860.90 00:16:20.465 [2024-12-06T22:06:53.337Z] =================================================================================================================== 00:16:20.465 [2024-12-06T22:06:53.337Z] Total : 85094.23 332.40 0.00 0.00 748.66 437.96 2860.90 00:16:21.033 22:06:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:21.033 22:06:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:21.033 22:06:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:21.033 22:06:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:21.033 22:06:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:21.033 { 00:16:21.033 "subsystems": [ 00:16:21.033 { 00:16:21.033 "subsystem": "bdev", 00:16:21.033 "config": [ 00:16:21.033 { 00:16:21.033 "params": { 00:16:21.033 "io_mechanism": "io_uring_cmd", 00:16:21.033 "conserve_cpu": true, 00:16:21.033 "filename": "/dev/ng0n1", 00:16:21.033 "name": "xnvme_bdev" 00:16:21.033 }, 00:16:21.033 "method": "bdev_xnvme_create" 00:16:21.033 }, 00:16:21.033 { 00:16:21.033 "method": "bdev_wait_for_examine" 00:16:21.033 } 00:16:21.033 ] 00:16:21.033 } 00:16:21.033 ] 00:16:21.033 } 00:16:21.033 [2024-12-06 22:06:53.822213] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:16:21.033 [2024-12-06 22:06:53.822341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71737 ] 00:16:21.292 [2024-12-06 22:06:53.979778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.292 [2024-12-06 22:06:54.054249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.550 Running I/O for 5 seconds... 
00:16:23.412 50618.00 IOPS, 197.73 MiB/s [2024-12-06T22:06:57.658Z] 39968.00 IOPS, 156.12 MiB/s [2024-12-06T22:06:58.592Z] 35830.00 IOPS, 139.96 MiB/s [2024-12-06T22:06:59.526Z] 33711.25 IOPS, 131.68 MiB/s [2024-12-06T22:06:59.526Z] 33033.60 IOPS, 129.04 MiB/s 00:16:26.654 Latency(us) 00:16:26.654 [2024-12-06T22:06:59.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.654 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:26.654 xnvme_bdev : 5.00 33019.02 128.98 0.00 0.00 1932.95 49.43 19358.33 00:16:26.654 [2024-12-06T22:06:59.526Z] =================================================================================================================== 00:16:26.654 [2024-12-06T22:06:59.526Z] Total : 33019.02 128.98 0.00 0.00 1932.95 49.43 19358.33 00:16:27.221 00:16:27.221 real 0m25.449s 00:16:27.221 user 0m16.410s 00:16:27.221 sys 0m7.182s 00:16:27.221 ************************************ 00:16:27.221 END TEST xnvme_bdevperf 00:16:27.221 ************************************ 00:16:27.221 22:06:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.221 22:06:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:27.221 22:07:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:27.221 22:07:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:27.221 22:07:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.221 22:07:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.221 ************************************ 00:16:27.221 START TEST xnvme_fio_plugin 00:16:27.221 ************************************ 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:27.221 22:07:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:27.221 { 00:16:27.221 "subsystems": [ 00:16:27.221 { 00:16:27.221 "subsystem": "bdev", 00:16:27.221 "config": [ 00:16:27.221 { 00:16:27.221 "params": { 00:16:27.221 "io_mechanism": "io_uring_cmd", 00:16:27.221 "conserve_cpu": true, 00:16:27.221 "filename": "/dev/ng0n1", 00:16:27.221 "name": "xnvme_bdev" 00:16:27.221 }, 00:16:27.221 "method": "bdev_xnvme_create" 00:16:27.221 }, 00:16:27.221 { 00:16:27.221 "method": "bdev_wait_for_examine" 00:16:27.221 } 00:16:27.221 ] 00:16:27.221 } 00:16:27.221 ] 00:16:27.221 } 00:16:27.481 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:27.481 fio-3.35 00:16:27.481 Starting 1 thread 00:16:34.159 00:16:34.159 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71850: Fri Dec 6 22:07:05 2024 00:16:34.159 read: IOPS=44.1k, BW=172MiB/s (181MB/s)(862MiB/5001msec) 00:16:34.159 slat (nsec): min=2870, max=55400, avg=3463.98, stdev=1475.65 00:16:34.159 clat (usec): min=644, max=3775, avg=1313.42, stdev=233.39 00:16:34.159 lat (usec): min=647, max=3809, avg=1316.89, stdev=233.67 00:16:34.159 clat percentiles (usec): 00:16:34.159 | 1.00th=[ 873], 5.00th=[ 988], 10.00th=[ 1057], 20.00th=[ 1123], 00:16:34.159 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1287], 60.00th=[ 1352], 00:16:34.159 | 70.00th=[ 1418], 80.00th=[ 1500], 90.00th=[ 1614], 95.00th=[ 1729], 00:16:34.159 | 99.00th=[ 1975], 99.50th=[ 2057], 99.90th=[ 2245], 99.95th=[ 2311], 00:16:34.159 | 99.99th=[ 3523] 00:16:34.159 bw ( KiB/s): min=157696, max=190464, per=99.20%, avg=175047.11, stdev=13720.37, samples=9 00:16:34.159 iops : min=39424, max=47616, avg=43761.78, stdev=3430.09, samples=9 00:16:34.159 lat (usec) : 750=0.07%, 1000=5.54% 00:16:34.159 lat (msec) : 2=93.59%, 4=0.80% 00:16:34.159 cpu : usr=60.06%, sys=36.94%, ctx=17, majf=0, minf=762 00:16:34.159 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:34.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.159 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:16:34.159 issued rwts: total=220608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.159 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.159 00:16:34.159 Run status group 0 (all jobs): 00:16:34.159 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=862MiB (904MB), run=5001-5001msec 00:16:34.159 ----------------------------------------------------- 00:16:34.159 Suppressions used: 00:16:34.159 count bytes template 00:16:34.159 1 11 /usr/src/fio/parse.c 00:16:34.159 1 8 libtcmalloc_minimal.so 00:16:34.159 1 904 libcrypto.so 00:16:34.159 ----------------------------------------------------- 00:16:34.159 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:34.159 22:07:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:34.159 { 00:16:34.159 "subsystems": [ 00:16:34.159 { 00:16:34.159 "subsystem": "bdev", 00:16:34.159 "config": [ 00:16:34.159 { 00:16:34.159 "params": { 00:16:34.159 "io_mechanism": "io_uring_cmd", 00:16:34.159 "conserve_cpu": true, 00:16:34.159 "filename": "/dev/ng0n1", 00:16:34.159 "name": "xnvme_bdev" 00:16:34.159 }, 00:16:34.159 "method": "bdev_xnvme_create" 00:16:34.159 }, 00:16:34.159 { 00:16:34.159 "method": "bdev_wait_for_examine" 00:16:34.159 } 00:16:34.159 ] 00:16:34.159 } 00:16:34.159 ] 00:16:34.159 } 00:16:34.420 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:34.420 fio-3.35 00:16:34.420 Starting 1 thread 00:16:41.007 00:16:41.007 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71941: Fri Dec 6 22:07:12 2024 00:16:41.007 write: IOPS=42.0k, BW=164MiB/s (172MB/s)(821MiB/5002msec); 0 zone resets 00:16:41.007 slat (usec): min=2, max=299, avg= 4.11, stdev= 2.29 00:16:41.007 clat (usec): min=553, max=4509, avg=1361.42, stdev=246.62 00:16:41.007 lat (usec): min=558, max=4519, avg=1365.53, stdev=247.29 00:16:41.007 clat percentiles (usec): 00:16:41.007 | 1.00th=[ 988], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1156], 00:16:41.007 | 30.00th=[ 1205], 40.00th=[ 1254], 50.00th=[ 1319], 60.00th=[ 1385], 00:16:41.007 | 70.00th=[ 1450], 80.00th=[ 1549], 90.00th=[ 1680], 95.00th=[ 1811], 00:16:41.007 | 99.00th=[ 2114], 99.50th=[ 2245], 99.90th=[ 2769], 99.95th=[ 3064], 00:16:41.007 | 99.99th=[ 3654] 00:16:41.007 bw ( KiB/s): min=152360, max=182360, per=100.00%, avg=168693.33, stdev=11242.00, samples=9 00:16:41.007 iops : min=38090, max=45590, avg=42173.33, stdev=2810.50, samples=9 00:16:41.007 lat (usec) : 750=0.01%, 1000=1.54% 00:16:41.007 lat (msec) : 2=96.63%, 4=1.82%, 10=0.01% 00:16:41.007 cpu : usr=58.81%, sys=36.09%, ctx=11, majf=0, minf=763 00:16:41.007 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:16:41.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.007 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:41.007 issued rwts: total=0,210184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.007 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.007 00:16:41.007 Run status group 0 (all jobs): 00:16:41.007 WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=821MiB (861MB), run=5002-5002msec 00:16:41.007 ----------------------------------------------------- 00:16:41.007 Suppressions used: 00:16:41.007 count bytes template 00:16:41.007 1 11 /usr/src/fio/parse.c 00:16:41.007 1 8 libtcmalloc_minimal.so 00:16:41.007 1 904 libcrypto.so 00:16:41.007 ----------------------------------------------------- 00:16:41.007 00:16:41.007 ************************************ 00:16:41.007 END TEST xnvme_fio_plugin 00:16:41.007 ************************************ 00:16:41.007 00:16:41.007 real 0m13.780s 00:16:41.007 user 0m8.768s 00:16:41.007 sys 0m4.280s 00:16:41.007 22:07:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.007 22:07:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:41.007 22:07:13 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71434 00:16:41.007 22:07:13 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71434 ']' 00:16:41.007 Process with pid 71434 is not found 00:16:41.007 22:07:13 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 71434 00:16:41.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71434) - No such process 00:16:41.007 22:07:13 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71434 is not found' 00:16:41.007 00:16:41.007 real 3m33.262s 00:16:41.007 user 2m0.278s 00:16:41.007 sys 1m18.768s 00:16:41.007 22:07:13 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.007 22:07:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.007 ************************************ 00:16:41.007 END TEST nvme_xnvme 00:16:41.007 ************************************ 00:16:41.273 22:07:13 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:41.273 22:07:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:41.273 22:07:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.273 22:07:13 -- common/autotest_common.sh@10 -- # set +x 00:16:41.273 ************************************ 00:16:41.273 START TEST blockdev_xnvme 00:16:41.273 ************************************ 00:16:41.273 22:07:13 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:41.273 * Looking for test storage... 00:16:41.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:41.273 22:07:13 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:41.273 22:07:13 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:41.273 22:07:13 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.273 22:07:14 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:41.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.273 --rc genhtml_branch_coverage=1 00:16:41.273 --rc genhtml_function_coverage=1 00:16:41.273 --rc genhtml_legend=1 00:16:41.273 --rc geninfo_all_blocks=1 00:16:41.273 --rc geninfo_unexecuted_blocks=1 00:16:41.273 00:16:41.273 ' 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:41.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.273 --rc genhtml_branch_coverage=1 00:16:41.273 --rc genhtml_function_coverage=1 00:16:41.273 --rc genhtml_legend=1 00:16:41.273 --rc geninfo_all_blocks=1 00:16:41.273 --rc geninfo_unexecuted_blocks=1 00:16:41.273 00:16:41.273 ' 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:41.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.273 --rc genhtml_branch_coverage=1 00:16:41.273 --rc genhtml_function_coverage=1 00:16:41.273 --rc genhtml_legend=1 00:16:41.273 --rc geninfo_all_blocks=1 00:16:41.273 --rc geninfo_unexecuted_blocks=1 00:16:41.273 00:16:41.273 ' 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:41.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.273 --rc genhtml_branch_coverage=1 00:16:41.273 --rc genhtml_function_coverage=1 00:16:41.273 --rc genhtml_legend=1 00:16:41.273 --rc geninfo_all_blocks=1 00:16:41.273 --rc geninfo_unexecuted_blocks=1 00:16:41.273 00:16:41.273 ' 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72075 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72075 00:16:41.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72075 ']' 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.273 22:07:14 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.273 22:07:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.538 [2024-12-06 22:07:14.166868] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:16:41.538 [2024-12-06 22:07:14.167025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72075 ] 00:16:41.538 [2024-12-06 22:07:14.331515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.800 [2024-12-06 22:07:14.455191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.373 22:07:15 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.373 22:07:15 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:42.373 22:07:15 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:16:42.373 22:07:15 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:16:42.373 22:07:15 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:42.373 22:07:15 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:42.373 22:07:15 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:42.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:43.515 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:43.516 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:43.516 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:43.516 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:16:43.516 22:07:16 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:43.516 nvme0n1 00:16:43.516 nvme0n2 00:16:43.516 nvme0n3 00:16:43.516 nvme1n1 00:16:43.516 nvme2n1 00:16:43.516 nvme3n1 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.516 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.516 22:07:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.776 
22:07:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c5a90f5c-8250-4982-8eec-daa77f475c76"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5a90f5c-8250-4982-8eec-daa77f475c76",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "b2e9e549-a7e0-46fd-8c9b-c37c1b95628d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b2e9e549-a7e0-46fd-8c9b-c37c1b95628d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6f18bad5-9f0d-475c-8f42-4b6f3445f0b3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6f18bad5-9f0d-475c-8f42-4b6f3445f0b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d2079d50-9152-4e37-be56-bf5e79d4d7cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d2079d50-9152-4e37-be56-bf5e79d4d7cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "339a334b-ced9-4dd3-8087-faa2df123e26"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "339a334b-ced9-4dd3-8087-faa2df123e26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5d5d0507-68d9-4051-9651-df977cf19993"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5d5d0507-68d9-4051-9651-df977cf19993",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:43.776 22:07:16 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72075 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72075 ']' 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72075 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 72075 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.776 killing process with pid 72075 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72075' 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72075 00:16:43.776 22:07:16 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72075 00:16:45.689 22:07:18 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:45.689 22:07:18 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:45.689 22:07:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:45.689 22:07:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.689 22:07:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.689 ************************************ 00:16:45.689 START TEST bdev_hello_world 00:16:45.689 ************************************ 00:16:45.689 22:07:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:45.689 [2024-12-06 22:07:18.272149] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:16:45.689 [2024-12-06 22:07:18.272315] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72359 ] 00:16:45.689 [2024-12-06 22:07:18.437841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.951 [2024-12-06 22:07:18.567892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.213 [2024-12-06 22:07:18.975583] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:46.213 [2024-12-06 22:07:18.975647] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:46.213 [2024-12-06 22:07:18.975667] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:46.213 [2024-12-06 22:07:18.977835] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:46.213 [2024-12-06 22:07:18.979359] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:46.213 [2024-12-06 22:07:18.979409] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:46.213 [2024-12-06 22:07:18.980028] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
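The bdev_hello_world test above reduces to a single example binary: hello_bdev opens the named bdev from a JSON config, writes "Hello World!", reads it back, and prints it. As invoked in this log (bdev.json is not reproduced here; it is assumed to define the same six xnvme bdevs created earlier, and the harness appends one empty positional argument, preserved verbatim):

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b nvme0n1 ''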
00:16:46.213 00:16:46.213 [2024-12-06 22:07:18.980060] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:47.158 00:16:47.158 real 0m1.577s 00:16:47.158 user 0m1.179s 00:16:47.158 sys 0m0.247s 00:16:47.158 22:07:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.158 ************************************ 00:16:47.158 END TEST bdev_hello_world 00:16:47.158 ************************************ 00:16:47.158 22:07:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:47.158 22:07:19 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:16:47.158 22:07:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.158 22:07:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.158 22:07:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:47.158 ************************************ 00:16:47.158 START TEST bdev_bounds 00:16:47.158 ************************************ 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:47.158 Process bdevio pid: 72396 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72396 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72396' 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72396 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72396 ']' 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.158 22:07:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:47.158 [2024-12-06 22:07:19.920268] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:16:47.158 [2024-12-06 22:07:19.920409] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72396 ] 00:16:47.418 [2024-12-06 22:07:20.086346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:47.418 [2024-12-06 22:07:20.215082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.418 [2024-12-06 22:07:20.215440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.418 [2024-12-06 22:07:20.215536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.990 22:07:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.990 22:07:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:47.990 22:07:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:48.251 I/O targets: 00:16:48.251 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:48.251 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:48.251 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:48.251 nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:48.251 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:48.251 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:48.251 00:16:48.251 00:16:48.251 CUnit - A unit testing framework for C - Version 2.1-3 00:16:48.251 http://cunit.sourceforge.net/ 00:16:48.251 00:16:48.251 00:16:48.251 Suite: bdevio tests on: nvme3n1 00:16:48.251 Test: blockdev write read block ...passed 00:16:48.251 Test: blockdev write zeroes read block ...passed 00:16:48.251 Test: blockdev write zeroes read no split ...passed 00:16:48.251 Test: blockdev write zeroes read split ...passed 00:16:48.251 Test: blockdev write zeroes read split partial ...passed 00:16:48.251 Test: blockdev reset ...passed 00:16:48.251 Test: blockdev write read 8 blocks ...passed 00:16:48.251 Test: blockdev write read size > 128k ...passed 00:16:48.251 Test: blockdev write read invalid size ...passed 00:16:48.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:48.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:48.251 Test: blockdev write read max offset ...passed 00:16:48.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:48.251 Test: blockdev writev readv 8 blocks ...passed 00:16:48.251 Test: blockdev writev readv 30 x 1block ...passed 00:16:48.251 Test: blockdev writev readv block ...passed 00:16:48.251 Test: blockdev writev readv size > 128k ...passed 00:16:48.251 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:48.251 Test: blockdev comparev and writev ...passed 00:16:48.251 Test: blockdev nvme passthru rw ...passed 00:16:48.251 Test: blockdev nvme passthru vendor specific ...passed 00:16:48.251 Test: blockdev nvme admin passthru ...passed 00:16:48.251 Test: blockdev copy ...passed 00:16:48.251 Suite: bdevio tests on: nvme2n1 00:16:48.251 Test: blockdev write read block ...passed 00:16:48.251 Test: blockdev write zeroes read block ...passed 00:16:48.251 Test: blockdev write zeroes read no split ...passed 00:16:48.252 Test: blockdev write zeroes read split ...passed 00:16:48.252 Test: blockdev write zeroes read split partial ...passed 00:16:48.252 Test: blockdev reset ...passed 
00:16:48.252 Test: blockdev write read 8 blocks ...passed 00:16:48.252 Test: blockdev write read size > 128k ...passed 00:16:48.252 Test: blockdev write read invalid size ...passed 00:16:48.252 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:48.252 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:48.252 Test: blockdev write read max offset ...passed 00:16:48.252 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:48.252 Test: blockdev writev readv 8 blocks ...passed 00:16:48.252 Test: blockdev writev readv 30 x 1block ...passed 00:16:48.252 Test: blockdev writev readv block ...passed 00:16:48.252 Test: blockdev writev readv size > 128k ...passed 00:16:48.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:48.252 Test: blockdev comparev and writev ...passed 00:16:48.252 Test: blockdev nvme passthru rw ...passed 00:16:48.252 Test: blockdev nvme passthru vendor specific ...passed 00:16:48.252 Test: blockdev nvme admin passthru ...passed 00:16:48.252 Test: blockdev copy ...passed 00:16:48.252 Suite: bdevio tests on: nvme1n1 00:16:48.252 Test: blockdev write read block ...passed 00:16:48.252 Test: blockdev write zeroes read block ...passed 00:16:48.252 Test: blockdev write zeroes read no split ...passed 00:16:48.252 Test: blockdev write zeroes read split ...passed 00:16:48.513 Test: blockdev write zeroes read split partial ...passed 00:16:48.513 Test: blockdev reset ...passed 00:16:48.513 Test: blockdev write read 8 blocks ...passed 00:16:48.513 Test: blockdev write read size > 128k ...passed 00:16:48.513 Test: blockdev write read invalid size ...passed 00:16:48.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:48.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:48.513 Test: blockdev write read max offset ...passed 00:16:48.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:48.513 Test: blockdev writev readv 8 blocks ...passed 00:16:48.513 Test: blockdev writev readv 30 x 1block ...passed 00:16:48.513 Test: blockdev writev readv block ...passed 00:16:48.513 Test: blockdev writev readv size > 128k ...passed 00:16:48.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:48.513 Test: blockdev comparev and writev ...passed 00:16:48.513 Test: blockdev nvme passthru rw ...passed 00:16:48.513 Test: blockdev nvme passthru vendor specific ...passed 00:16:48.513 Test: blockdev nvme admin passthru ...passed 00:16:48.513 Test: blockdev copy ...passed 00:16:48.513 Suite: bdevio tests on: nvme0n3 00:16:48.513 Test: blockdev write read block ...passed 00:16:48.513 Test: blockdev write zeroes read block ...passed 00:16:48.513 Test: blockdev write zeroes read no split ...passed 00:16:48.513 Test: blockdev write zeroes read split ...passed 00:16:48.513 Test: blockdev write zeroes read split partial ...passed 00:16:48.513 Test: blockdev reset ...passed 00:16:48.513 Test: blockdev write read 8 blocks ...passed 00:16:48.513 Test: blockdev write read size > 128k ...passed 00:16:48.513 Test: blockdev write read invalid size ...passed 00:16:48.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:48.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:48.513 Test: blockdev write read max offset ...passed 00:16:48.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:48.513 Test: blockdev writev readv 8 blocks 
...passed 00:16:48.513 Test: blockdev writev readv 30 x 1block ...passed 00:16:48.513 Test: blockdev writev readv block ...passed 00:16:48.513 Test: blockdev writev readv size > 128k ...passed 00:16:48.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:48.513 Test: blockdev comparev and writev ...passed 00:16:48.513 Test: blockdev nvme passthru rw ...passed 00:16:48.513 Test: blockdev nvme passthru vendor specific ...passed 00:16:48.513 Test: blockdev nvme admin passthru ...passed 00:16:48.513 Test: blockdev copy ...passed 00:16:48.513 Suite: bdevio tests on: nvme0n2 00:16:48.513 Test: blockdev write read block ...passed 00:16:48.513 Test: blockdev write zeroes read block ...passed 00:16:48.513 Test: blockdev write zeroes read no split ...passed 00:16:48.513 Test: blockdev write zeroes read split ...passed 00:16:48.513 Test: blockdev write zeroes read split partial ...passed 00:16:48.513 Test: blockdev reset ...passed 00:16:48.513 Test: blockdev write read 8 blocks ...passed 00:16:48.513 Test: blockdev write read size > 128k ...passed 00:16:48.513 Test: blockdev write read invalid size ...passed 00:16:48.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:48.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:48.513 Test: blockdev write read max offset ...passed 00:16:48.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:48.513 Test: blockdev writev readv 8 blocks ...passed 00:16:48.513 Test: blockdev writev readv 30 x 1block ...passed 00:16:48.513 Test: blockdev writev readv block ...passed 00:16:48.513 Test: blockdev writev readv size > 128k ...passed 00:16:48.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:48.513 Test: blockdev comparev and writev ...passed 00:16:48.513 Test: blockdev nvme passthru rw ...passed 00:16:48.513 Test: blockdev nvme passthru vendor specific ...passed 00:16:48.513 Test: blockdev nvme admin passthru ...passed 00:16:48.513 Test: blockdev copy ...passed 00:16:48.513 Suite: bdevio tests on: nvme0n1 00:16:48.513 Test: blockdev write read block ...passed 00:16:48.513 Test: blockdev write zeroes read block ...passed 00:16:48.775 Test: blockdev write zeroes read no split ...passed 00:16:48.775 Test: blockdev write zeroes read split ...passed 00:16:48.775 Test: blockdev write zeroes read split partial ...passed 00:16:48.775 Test: blockdev reset ...passed 00:16:48.775 Test: blockdev write read 8 blocks ...passed 00:16:48.775 Test: blockdev write read size > 128k ...passed 00:16:48.775 Test: blockdev write read invalid size ...passed 00:16:48.775 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:48.775 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:48.775 Test: blockdev write read max offset ...passed 00:16:48.775 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:48.775 Test: blockdev writev readv 8 blocks ...passed 00:16:48.775 Test: blockdev writev readv 30 x 1block ...passed 00:16:48.775 Test: blockdev writev readv block ...passed 00:16:48.775 Test: blockdev writev readv size > 128k ...passed 00:16:48.775 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:48.775 Test: blockdev comparev and writev ...passed 00:16:48.775 Test: blockdev nvme passthru rw ...passed 00:16:48.775 Test: blockdev nvme passthru vendor specific ...passed 00:16:48.775 Test: blockdev nvme admin passthru ...passed 00:16:48.775 Test: blockdev copy ...passed 
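The CUnit output above is produced by bdevio: the harness launches it with -w so the app starts and then waits, and tests.py sends a perform_tests RPC that drives the suites against every bdev in the config. A sketch of the two commands as they appear in the xtrace; backgrounding and waiting for the RPC socket are simplified here, where the harness uses its own waitforlisten helper:

bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
"$bdevio" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# Once the app is listening on the default RPC socket, trigger the suites:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
wait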
00:16:48.775 
00:16:48.775 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:16:48.775               suites      6      6    n/a      0        0
00:16:48.775                tests    138    138    138      0        0
00:16:48.775              asserts    780    780    780      0      n/a
00:16:48.775 
00:16:48.775 Elapsed time = 1.716 seconds
00:16:48.775 0
00:16:48.775 22:07:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72396
00:16:48.775 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72396 ']'
00:16:48.775 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72396
00:16:48.775 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:16:48.776 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:48.776 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72396
00:16:49.038 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:49.038 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:49.038 killing process with pid 72396
00:16:49.038 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72396'
00:16:49.038 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72396
00:16:49.038 22:07:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72396
00:16:49.984 22:07:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:16:49.984 
00:16:49.984 real 0m2.682s
00:16:49.984 user 0m6.331s
00:16:49.984 sys 0m0.417s
00:16:49.984 22:07:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:49.984 22:07:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:16:49.984 ************************************
00:16:49.984 END TEST bdev_bounds
00:16:49.984 ************************************
00:16:49.984 22:07:22 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:16:49.984 22:07:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:16:49.984 22:07:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:49.984 22:07:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:49.984 ************************************
00:16:49.984 START TEST bdev_nbd
00:16:49.984 ************************************
00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
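The killprocess calls traced above follow a fixed shape: verify the pid argument, probe the process with kill -0, resolve its name via ps so a sudo wrapper is never killed, then kill and wait so the reactor is fully reaped before the next test starts. A simplified reconstruction from the trace (the real helper in autotest_common.sh handles more corner cases):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                  # no pid supplied
      kill -0 "$pid" 2>/dev/null || return 0     # already gone
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1     # refuse to kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"    # valid because the target was started by this shell
  }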
00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:49.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72456 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72456 /var/tmp/spdk-nbd.sock 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72456 ']' 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.984 22:07:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:49.984 [2024-12-06 22:07:22.688276] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
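The nbd phase launches bdev_svc as a standalone RPC server on /var/tmp/spdk-nbd.sock and blocks in waitforlisten until that socket answers. A minimal sketch of the launch-and-wait pattern, assuming the working directory is the SPDK repo root (the polling loop is a simplified stand-in for the real waitforlisten helper; spdk_get_version is used only as a cheap liveness RPC):

  rpc_sock=/var/tmp/spdk-nbd.sock
  ./test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 \
      --json ./test/bdev/bdev.json &
  nbd_pid=$!
  trap 'killprocess "$nbd_pid"' SIGINT SIGTERM EXIT   # the real script also runs a cleanup hook
  for _ in $(seq 1 100); do
      # success on any RPC means the server is up and listening
      ./scripts/rpc.py -s "$rpc_sock" spdk_get_version &>/dev/null && break
      sleep 0.1
  done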
00:16:49.984 [2024-12-06 22:07:22.688810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.245 [2024-12-06 22:07:22.857371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.245 [2024-12-06 22:07:23.004786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:50.818 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.079 
1+0 records in 00:16:51.079 1+0 records out 00:16:51.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000979876 s, 4.2 MB/s 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:51.079 22:07:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.341 1+0 records in 00:16:51.341 1+0 records out 00:16:51.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808336 s, 5.1 MB/s 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:51.341 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:51.602 22:07:24 
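The waitfornbd probes around this point (one per device, nbd0 through nbd5) all reduce to the same two steps: poll /proc/partitions until the kernel has registered the node, then prove it services I/O with a single 4 KiB O_DIRECT read whose output size must be non-zero. Condensed from the trace (the sleep between polls is implied by the bounded loop, not shown in the xtrace):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do              # bounded poll, ~2 s max
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # Existence is not enough; one direct read proves the device answers I/O.
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }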
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.602 1+0 records in 00:16:51.602 1+0 records out 00:16:51.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761998 s, 5.4 MB/s 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:51.602 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.864 1+0 records in 00:16:51.864 1+0 records out 00:16:51.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713389 s, 5.7 MB/s 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:51.864 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.127 1+0 records in 00:16:52.127 1+0 records out 00:16:52.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881426 s, 4.6 MB/s 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:52.127 22:07:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:52.127 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.398 1+0 records in 00:16:52.398 1+0 records out 00:16:52.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565834 s, 7.2 MB/s 00:16:52.398 22:07:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.398 22:07:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:52.398 22:07:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.398 22:07:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:52.398 22:07:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:52.398 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:52.398 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:52.398 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:52.398 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:52.398 { 00:16:52.398 "nbd_device": "/dev/nbd0", 00:16:52.398 "bdev_name": "nvme0n1" 00:16:52.398 }, 00:16:52.398 { 00:16:52.398 "nbd_device": "/dev/nbd1", 00:16:52.398 "bdev_name": "nvme0n2" 00:16:52.398 }, 00:16:52.398 { 00:16:52.398 "nbd_device": "/dev/nbd2", 00:16:52.398 "bdev_name": "nvme0n3" 00:16:52.398 }, 00:16:52.399 { 00:16:52.399 "nbd_device": "/dev/nbd3", 00:16:52.399 "bdev_name": "nvme1n1" 00:16:52.399 }, 00:16:52.399 { 00:16:52.399 "nbd_device": "/dev/nbd4", 00:16:52.399 "bdev_name": "nvme2n1" 00:16:52.399 }, 00:16:52.399 { 00:16:52.399 "nbd_device": "/dev/nbd5", 00:16:52.399 "bdev_name": "nvme3n1" 00:16:52.399 } 00:16:52.399 ]' 00:16:52.399 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:52.399 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:52.399 { 00:16:52.399 "nbd_device": "/dev/nbd0", 00:16:52.399 "bdev_name": "nvme0n1" 00:16:52.399 }, 00:16:52.399 { 00:16:52.399 "nbd_device": "/dev/nbd1", 00:16:52.399 "bdev_name": "nvme0n2" 00:16:52.399 }, 00:16:52.399 { 00:16:52.399 "nbd_device": "/dev/nbd2", 00:16:52.399 "bdev_name": "nvme0n3" 00:16:52.399 }, 00:16:52.399 { 00:16:52.399 "nbd_device": "/dev/nbd3", 00:16:52.399 "bdev_name": "nvme1n1" 00:16:52.399 }, 00:16:52.399 { 00:16:52.399 "nbd_device": "/dev/nbd4", 00:16:52.399 "bdev_name": "nvme2n1" 00:16:52.399 }, 00:16:52.399 { 00:16:52.399 "nbd_device": "/dev/nbd5", 00:16:52.399 "bdev_name": "nvme3n1" 00:16:52.399 } 00:16:52.399 ]' 00:16:52.399 22:07:25 
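nbd_get_disks answers with the JSON array echoed above, one {nbd_device, bdev_name} object per export; the jq filter in the next trace lines strips it down to the device column so the shell can iterate over plain paths:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device'
  # -> /dev/nbd0
  #    /dev/nbd1
  #    ... one line per exported device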
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:52.399 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:52.399 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:52.399 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:52.399 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:52.399 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:52.399 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.399 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.665 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.924 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.183 22:07:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.443 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:53.704 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:53.965 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:53.966 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:53.966 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:53.966 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:53.966 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:53.966 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:53.966 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:53.966 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:53.966 22:07:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:54.227 /dev/nbd0 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.227 1+0 records in 00:16:54.227 1+0 records out 00:16:54.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111832 s, 3.7 MB/s 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:54.227 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:54.487 /dev/nbd1 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.487 1+0 records in 00:16:54.487 1+0 records out 00:16:54.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102048 s, 4.0 MB/s 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:54.487 22:07:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:54.487 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:54.747 /dev/nbd10 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.747 1+0 records in 00:16:54.747 1+0 records out 00:16:54.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658813 s, 6.2 MB/s 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:54.747 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:55.007 /dev/nbd11 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:55.007 22:07:27 
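This second setup pass (nbd_rpc_data_verify) re-exports the same six bdevs, but onto an explicit device list that mixes low and double-digit nodes (/dev/nbd0, /dev/nbd1, /dev/nbd10, ...). The driving loop simply pairs the two arrays by index, as in this condensed sketch:

  bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  for ((i = 0; i < ${#bdev_list[@]}; i++)); do
      ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
  done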
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.007 1+0 records in 00:16:55.007 1+0 records out 00:16:55.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761738 s, 5.4 MB/s 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:55.007 22:07:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:55.269 /dev/nbd12 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.269 1+0 records in 00:16:55.269 1+0 records out 00:16:55.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011052 s, 3.7 MB/s 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:55.269 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:55.530 /dev/nbd13 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.530 1+0 records in 00:16:55.530 1+0 records out 00:16:55.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159502 s, 2.6 MB/s 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:55.530 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:55.791 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd0", 00:16:55.791 "bdev_name": "nvme0n1" 00:16:55.791 }, 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd1", 00:16:55.791 "bdev_name": "nvme0n2" 00:16:55.791 }, 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd10", 00:16:55.791 "bdev_name": "nvme0n3" 00:16:55.791 }, 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd11", 00:16:55.791 "bdev_name": "nvme1n1" 00:16:55.791 }, 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd12", 00:16:55.791 "bdev_name": "nvme2n1" 00:16:55.791 }, 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd13", 00:16:55.791 "bdev_name": "nvme3n1" 00:16:55.791 } 00:16:55.791 ]' 00:16:55.791 22:07:28 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd0", 00:16:55.791 "bdev_name": "nvme0n1" 00:16:55.791 }, 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd1", 00:16:55.791 "bdev_name": "nvme0n2" 00:16:55.791 }, 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd10", 00:16:55.791 "bdev_name": "nvme0n3" 00:16:55.791 }, 00:16:55.791 { 00:16:55.791 "nbd_device": "/dev/nbd11", 00:16:55.791 "bdev_name": "nvme1n1" 00:16:55.792 }, 00:16:55.792 { 00:16:55.792 "nbd_device": "/dev/nbd12", 00:16:55.792 "bdev_name": "nvme2n1" 00:16:55.792 }, 00:16:55.792 { 00:16:55.792 "nbd_device": "/dev/nbd13", 00:16:55.792 "bdev_name": "nvme3n1" 00:16:55.792 } 00:16:55.792 ]' 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:55.792 /dev/nbd1 00:16:55.792 /dev/nbd10 00:16:55.792 /dev/nbd11 00:16:55.792 /dev/nbd12 00:16:55.792 /dev/nbd13' 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:55.792 /dev/nbd1 00:16:55.792 /dev/nbd10 00:16:55.792 /dev/nbd11 00:16:55.792 /dev/nbd12 00:16:55.792 /dev/nbd13' 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:55.792 256+0 records in 00:16:55.792 256+0 records out 00:16:55.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542102 s, 193 MB/s 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:55.792 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:56.053 256+0 records in 00:16:56.053 256+0 records out 00:16:56.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233052 s, 4.5 MB/s 00:16:56.053 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:56.053 22:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:56.314 256+0 records in 00:16:56.314 256+0 records out 00:16:56.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24141 s, 
4.3 MB/s 00:16:56.314 22:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:56.314 22:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:56.574 256+0 records in 00:16:56.574 256+0 records out 00:16:56.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216998 s, 4.8 MB/s 00:16:56.574 22:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:56.574 22:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:56.835 256+0 records in 00:16:56.835 256+0 records out 00:16:56.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24415 s, 4.3 MB/s 00:16:56.835 22:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:56.835 22:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:57.097 256+0 records in 00:16:57.097 256+0 records out 00:16:57.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2421 s, 4.3 MB/s 00:16:57.097 22:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:57.097 22:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:57.359 256+0 records in 00:16:57.359 256+0 records out 00:16:57.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.287413 s, 3.6 MB/s 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:57.359 
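The data check traced here is deliberately simple: one 1 MiB buffer of /dev/urandom is written to every export with O_DIRECT, then each device is compared byte-for-byte against that same buffer. The per-device throughput varies (3.6 to 4.8 MB/s above) because each dd is a synchronous direct-I/O pass through the full nbd-to-bdev stack. In outline:

  randfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  dd if=/dev/urandom of="$randfile" bs=4096 count=256           # 1 MiB pattern
  for nbd in "${nbd_list[@]}"; do
      dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in "${nbd_list[@]}"; do
      cmp -b -n 1M "$randfile" "$nbd"    # fails loudly on the first bad byte
  done
  rm "$randfile"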
22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.359 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.619 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.880 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.140 22:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.402 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.664 
22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.664 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:58.925 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:58.925 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:58.925 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:58.926 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:59.187 malloc_lvol_verify 00:16:59.187 22:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:59.448 7a5a25c9-c061-475a-b1ca-398489e58b82 00:16:59.448 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:59.708 12ce4ddd-363a-4724-ab2f-64b4c30282b6 00:16:59.708 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:59.970 /dev/nbd0 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:59.970 mke2fs 1.47.0 (5-Feb-2023) 00:16:59.970 Discarding device blocks: 0/4096 
done 00:16:59.970 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:59.970 00:16:59.970 Allocating group tables: 0/1 done 00:16:59.970 Writing inode tables: 0/1 done 00:16:59.970 Creating journal (1024 blocks): done 00:16:59.970 Writing superblocks and filesystem accounting information: 0/1 done 00:16:59.970 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.970 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72456 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72456 ']' 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72456 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72456 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.234 killing process with pid 72456 00:17:00.234 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72456' 00:17:00.235 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72456 00:17:00.235 22:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72456 00:17:01.203 22:07:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:01.203 00:17:01.203 real 0m11.187s 00:17:01.203 user 0m14.902s 00:17:01.203 sys 0m3.865s 00:17:01.203 ************************************ 00:17:01.203 END TEST bdev_nbd 00:17:01.203 ************************************ 00:17:01.203 22:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.203 22:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
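The bdev_nbd verify pass traced above reduces to a short pattern: write a random 1 MiB file through each exported /dev/nbdX with O_DIRECT, read it back with cmp, then detach each device over the RPC socket. The following is a minimal bash sketch of that pattern, not the helper itself; it assumes rpc.py is on PATH and simplifies the script's bounded 20-iteration wait into an open-ended poll.

    # Devices and temp file as used by the trace above
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

    # Write 1 MiB (256 x 4 KiB) of the random file through each NBD device,
    # bypassing the page cache so the data actually reaches the SPDK bdev
    for d in "${nbd_list[@]}"; do
      dd if="$tmp" of="$d" bs=4096 count=256 oflag=direct
    done

    # Verify byte-for-byte that each device returns the same first 1 MiB
    for d in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$d"
    done

    # Detach each device via the SPDK RPC socket and wait for the kernel to
    # drop it from /proc/partitions (the real helper caps this at 20 tries;
    # the poll interval here is an assumption)
    for d in "${nbd_list[@]}"; do
      rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$d"
      while grep -q -w "$(basename "$d")" /proc/partitions; do sleep 0.1; done
    done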
00:17:01.203 22:07:33 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:01.203 22:07:33 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:17:01.203 22:07:33 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:17:01.203 22:07:33 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:17:01.203 22:07:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.203 22:07:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.203 22:07:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:01.203 ************************************ 00:17:01.203 START TEST bdev_fio 00:17:01.203 ************************************ 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:01.203 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:01.203 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:01.204 ************************************ 00:17:01.204 START TEST bdev_fio_rw_verify 00:17:01.204 ************************************ 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:01.204 22:07:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:01.466 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:01.466 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:01.466 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:01.466 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:01.466 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:01.466 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:01.466 fio-3.35 00:17:01.466 Starting 6 threads 00:17:13.704 00:17:13.704 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72868: Fri Dec 6 22:07:44 2024 00:17:13.704 read: IOPS=13.6k, BW=53.1MiB/s (55.7MB/s)(532MiB/10002msec) 00:17:13.704 slat (usec): min=2, max=2314, avg= 7.48, stdev=17.62 00:17:13.704 clat (usec): min=95, max=6973, avg=1408.17, stdev=742.32 00:17:13.704 lat (usec): min=100, max=6982, avg=1415.65, stdev=742.94 
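Before this run starts, fio_test_suite has generated bdev.fio and launched fio with the SPDK bdev plugin as an external ioengine, preloading the ASan runtime ahead of it (the trace resolves libasan.so.8 via ldd so the instrumented plugin can find it). A sketch of the equivalent manual invocation follows; the job-file body is reduced to the pieces echoed in the trace, and the global verify options that fio_config_gen's template normally adds are omitted, so treat the job file as an assumption rather than the real template.

    SPDK=/home/vagrant/spdk_repo/spdk

    # Job file: serialize_overlap=1 plus one [job_*] section per bdev,
    # exactly as echoed in the trace above
    printf '%s\n' '[global]' 'serialize_overlap=1' '' \
        '[job_nvme0n1]' 'filename=nvme0n1' > "$SPDK/test/bdev/bdev.fio"

    # ASan is preloaded ahead of the fio bdev plugin, mirroring the LD_PRELOAD
    # the trace constructs; the fio flags are reproduced from the logged command
    LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$SPDK/test/bdev/bdev.fio" --verify_state_save=0 \
        --spdk_json_conf="$SPDK/test/bdev/bdev.json" --spdk_mem=0 \
        --aux-path="$SPDK/../output"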
00:17:13.704 clat percentiles (usec): 00:17:13.704 | 50.000th=[ 1319], 99.000th=[ 3687], 99.900th=[ 5145], 99.990th=[ 6456], 00:17:13.704 | 99.999th=[ 6980] 00:17:13.705 write: IOPS=13.9k, BW=54.4MiB/s (57.1MB/s)(544MiB/10002msec); 0 zone resets 00:17:13.705 slat (usec): min=13, max=5513, avg=44.54, stdev=148.93 00:17:13.705 clat (usec): min=116, max=9790, avg=1728.68, stdev=827.61 00:17:13.705 lat (usec): min=133, max=9827, avg=1773.22, stdev=840.84 00:17:13.705 clat percentiles (usec): 00:17:13.705 | 50.000th=[ 1598], 99.000th=[ 4293], 99.900th=[ 5735], 99.990th=[ 7767], 00:17:13.705 | 99.999th=[ 9765] 00:17:13.705 bw ( KiB/s): min=48685, max=64772, per=99.96%, avg=55708.37, stdev=894.99, samples=114 00:17:13.705 iops : min=12169, max=16193, avg=13926.26, stdev=223.78, samples=114 00:17:13.705 lat (usec) : 100=0.01%, 250=0.99%, 500=4.89%, 750=7.55%, 1000=10.82% 00:17:13.705 lat (msec) : 2=50.88%, 4=23.82%, 10=1.05% 00:17:13.705 cpu : usr=42.22%, sys=33.36%, ctx=5548, majf=0, minf=14123 00:17:13.705 IO depths : 1=11.0%, 2=23.4%, 4=51.5%, 8=14.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.705 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.705 issued rwts: total=136067,139351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.705 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:13.705 00:17:13.705 Run status group 0 (all jobs): 00:17:13.705 READ: bw=53.1MiB/s (55.7MB/s), 53.1MiB/s-53.1MiB/s (55.7MB/s-55.7MB/s), io=532MiB (557MB), run=10002-10002msec 00:17:13.705 WRITE: bw=54.4MiB/s (57.1MB/s), 54.4MiB/s-54.4MiB/s (57.1MB/s-57.1MB/s), io=544MiB (571MB), run=10002-10002msec 00:17:13.705 ----------------------------------------------------- 00:17:13.705 Suppressions used: 00:17:13.705 count bytes template 00:17:13.705 6 48 /usr/src/fio/parse.c 00:17:13.705 3187 305952 /usr/src/fio/iolog.c 00:17:13.705 1 8 libtcmalloc_minimal.so 00:17:13.705 1 904 libcrypto.so 00:17:13.705 ----------------------------------------------------- 00:17:13.705 00:17:13.705 ************************************ 00:17:13.705 END TEST bdev_fio_rw_verify 00:17:13.705 ************************************ 00:17:13.705 00:17:13.705 real 0m12.103s 00:17:13.705 user 0m26.913s 00:17:13.705 sys 0m20.395s 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c5a90f5c-8250-4982-8eec-daa77f475c76"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5a90f5c-8250-4982-8eec-daa77f475c76",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "b2e9e549-a7e0-46fd-8c9b-c37c1b95628d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b2e9e549-a7e0-46fd-8c9b-c37c1b95628d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6f18bad5-9f0d-475c-8f42-4b6f3445f0b3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6f18bad5-9f0d-475c-8f42-4b6f3445f0b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d2079d50-9152-4e37-be56-bf5e79d4d7cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d2079d50-9152-4e37-be56-bf5e79d4d7cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "339a334b-ced9-4dd3-8087-faa2df123e26"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "339a334b-ced9-4dd3-8087-faa2df123e26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5d5d0507-68d9-4051-9651-df977cf19993"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5d5d0507-68d9-4051-9651-df977cf19993",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:13.705 /home/vagrant/spdk_repo/spdk 00:17:13.705 ************************************ 00:17:13.705 END TEST bdev_fio 00:17:13.705 ************************************ 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:13.705 
22:07:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:13.705 00:17:13.705 real 0m12.290s 00:17:13.705 user 0m26.999s 00:17:13.705 sys 0m20.473s 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.705 22:07:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:13.705 22:07:46 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:13.705 22:07:46 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:13.705 22:07:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:13.705 22:07:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.705 22:07:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:13.705 ************************************ 00:17:13.705 START TEST bdev_verify 00:17:13.705 ************************************ 00:17:13.705 22:07:46 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:13.705 [2024-12-06 22:07:46.306045] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:17:13.705 [2024-12-06 22:07:46.306440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73039 ] 00:17:13.705 [2024-12-06 22:07:46.475997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:13.966 [2024-12-06 22:07:46.599358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.966 [2024-12-06 22:07:46.599484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.227 Running I/O for 5 seconds... 
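The bdev_verify test is a single bdevperf invocation; the command below is reproduced from the trace (minus the wrapper's trailing empty argument). -q is the per-job queue depth, -o the I/O size in bytes, -w the workload (verify re-reads and checks every block it wrote), and -t the runtime in seconds; -C and -m 0x3 come straight from the test wrapper and govern how the two reactor cores are used.

    # A 5-second verify pass at queue depth 128 with 4 KiB I/Os across all
    # bdevs defined in bdev.json, on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The later runs in this section vary only these knobs: the big-I/O pass switches to -o 65536, and the write_zeroes pass to -w write_zeroes -t 1 on a single core.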
00:17:16.574 24448.00 IOPS, 95.50 MiB/s [2024-12-06T22:07:50.391Z] 24416.00 IOPS, 95.38 MiB/s [2024-12-06T22:07:51.336Z] 24213.33 IOPS, 94.58 MiB/s [2024-12-06T22:07:52.278Z] 23552.00 IOPS, 92.00 MiB/s [2024-12-06T22:07:52.278Z] 23379.20 IOPS, 91.33 MiB/s 00:17:19.406 Latency(us) 00:17:19.406 [2024-12-06T22:07:52.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.406 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0x0 length 0x80000 00:17:19.406 nvme0n1 : 5.02 1707.18 6.67 0.00 0.00 74822.28 9981.64 72997.02 00:17:19.406 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0x80000 length 0x80000 00:17:19.406 nvme0n1 : 5.04 1979.48 7.73 0.00 0.00 64555.98 8469.27 54445.29 00:17:19.406 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0x0 length 0x80000 00:17:19.406 nvme0n2 : 5.07 1692.53 6.61 0.00 0.00 75300.01 7410.61 66140.95 00:17:19.406 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0x80000 length 0x80000 00:17:19.406 nvme0n2 : 5.06 1973.69 7.71 0.00 0.00 64633.76 11594.83 54445.29 00:17:19.406 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0x0 length 0x80000 00:17:19.406 nvme0n3 : 5.05 1671.54 6.53 0.00 0.00 76074.91 14619.57 66947.54 00:17:19.406 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0x80000 length 0x80000 00:17:19.406 nvme0n3 : 5.07 1969.56 7.69 0.00 0.00 64659.78 12300.60 62107.96 00:17:19.406 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0x0 length 0xa0000 00:17:19.406 nvme1n1 : 5.08 1688.95 6.60 0.00 0.00 75115.22 8418.86 70980.53 00:17:19.406 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0xa0000 length 0xa0000 00:17:19.406 nvme1n1 : 5.05 1951.62 7.62 0.00 0.00 65144.53 14922.04 55655.19 00:17:19.406 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0x0 length 0x20000 00:17:19.406 nvme2n1 : 5.08 1687.41 6.59 0.00 0.00 75030.61 7713.08 77030.01 00:17:19.406 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:19.406 Verification LBA range: start 0x20000 length 0x20000 00:17:19.406 nvme2n1 : 5.05 1976.38 7.72 0.00 0.00 64215.13 7461.02 63317.86 00:17:19.406 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:19.407 Verification LBA range: start 0x0 length 0xbd0bd 00:17:19.407 nvme3n1 : 5.08 2274.49 8.88 0.00 0.00 55476.00 4839.58 62914.56 00:17:19.407 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:19.407 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:19.407 nvme3n1 : 5.07 2602.66 10.17 0.00 0.00 48581.40 6704.84 53638.70 00:17:19.407 [2024-12-06T22:07:52.279Z] =================================================================================================================== 00:17:19.407 [2024-12-06T22:07:52.279Z] Total : 23175.49 90.53 0.00 0.00 65805.92 4839.58 77030.01 00:17:20.351 ************************************ 00:17:20.351 END TEST bdev_verify 00:17:20.351 ************************************ 00:17:20.351 00:17:20.351 real 
0m6.786s 00:17:20.351 user 0m10.871s 00:17:20.351 sys 0m1.555s 00:17:20.351 22:07:53 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.351 22:07:53 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:20.351 22:07:53 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:20.351 22:07:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:20.351 22:07:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.351 22:07:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:20.351 ************************************ 00:17:20.351 START TEST bdev_verify_big_io 00:17:20.351 ************************************ 00:17:20.351 22:07:53 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:20.351 [2024-12-06 22:07:53.161020] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:17:20.351 [2024-12-06 22:07:53.161207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73140 ] 00:17:20.613 [2024-12-06 22:07:53.331041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:20.613 [2024-12-06 22:07:53.448890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.613 [2024-12-06 22:07:53.448983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.184 Running I/O for 5 seconds... 
00:17:27.084 1598.00 IOPS, 99.88 MiB/s [2024-12-06T22:07:59.956Z] 2491.00 IOPS, 155.69 MiB/s [2024-12-06T22:08:00.217Z] 2972.00 IOPS, 185.75 MiB/s 00:17:27.345 Latency(us) 00:17:27.345 [2024-12-06T22:08:00.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.345 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x0 length 0x8000 00:17:27.345 nvme0n1 : 5.86 125.70 7.86 0.00 0.00 988637.55 28230.89 1051802.39 00:17:27.345 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x8000 length 0x8000 00:17:27.345 nvme0n1 : 5.93 129.56 8.10 0.00 0.00 966768.74 6604.01 1013085.74 00:17:27.345 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x0 length 0x8000 00:17:27.345 nvme0n2 : 5.92 127.10 7.94 0.00 0.00 956008.81 57268.38 1103424.59 00:17:27.345 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x8000 length 0x8000 00:17:27.345 nvme0n2 : 5.78 132.80 8.30 0.00 0.00 907554.40 115343.36 845313.58 00:17:27.345 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x0 length 0x8000 00:17:27.345 nvme0n3 : 5.95 118.39 7.40 0.00 0.00 990428.84 84289.38 1374441.16 00:17:27.345 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x8000 length 0x8000 00:17:27.345 nvme0n3 : 5.85 139.57 8.72 0.00 0.00 846112.76 57671.68 1006632.96 00:17:27.345 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x0 length 0xa000 00:17:27.345 nvme1n1 : 5.86 111.87 6.99 0.00 0.00 1014271.12 4738.76 1438968.91 00:17:27.345 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0xa000 length 0xa000 00:17:27.345 nvme1n1 : 5.92 100.07 6.25 0.00 0.00 1133724.77 56461.78 2774693.42 00:17:27.345 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x0 length 0x2000 00:17:27.345 nvme2n1 : 5.92 151.29 9.46 0.00 0.00 720595.16 16333.59 987274.63 00:17:27.345 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x2000 length 0x2000 00:17:27.345 nvme2n1 : 5.91 113.77 7.11 0.00 0.00 969600.92 110503.78 1935832.62 00:17:27.345 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0x0 length 0xbd0b 00:17:27.345 nvme3n1 : 5.96 149.67 9.35 0.00 0.00 712468.78 4663.14 1290555.08 00:17:27.345 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:27.345 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:27.345 nvme3n1 : 5.93 184.84 11.55 0.00 0.00 582210.34 7410.61 1503496.66 00:17:27.345 [2024-12-06T22:08:00.217Z] =================================================================================================================== 00:17:27.345 [2024-12-06T22:08:00.217Z] Total : 1584.63 99.04 0.00 0.00 875171.44 4663.14 2774693.42 00:17:27.917 00:17:27.917 real 0m7.591s 00:17:27.917 user 0m13.908s 00:17:27.917 sys 0m0.460s 00:17:27.917 22:08:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.917 
************************************ 00:17:27.917 22:08:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.917 END TEST bdev_verify_big_io 00:17:27.917 ************************************ 00:17:27.917 22:08:00 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:27.917 22:08:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:27.917 22:08:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.917 22:08:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:27.917 ************************************ 00:17:27.917 START TEST bdev_write_zeroes 00:17:27.917 ************************************ 00:17:27.917 22:08:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:28.178 [2024-12-06 22:08:00.786760] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:17:28.178 [2024-12-06 22:08:00.786881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73255 ] 00:17:28.178 [2024-12-06 22:08:00.946115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.178 [2024-12-06 22:08:01.026420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.750 Running I/O for 1 seconds... 
00:17:29.690 75648.00 IOPS, 295.50 MiB/s 00:17:29.690 Latency(us) 00:17:29.690 [2024-12-06T22:08:02.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.690 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:29.690 nvme0n1 : 1.01 11505.77 44.94 0.00 0.00 11115.40 5394.12 18551.73 00:17:29.690 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:29.690 nvme0n2 : 1.01 11492.67 44.89 0.00 0.00 11121.54 5394.12 18551.73 00:17:29.690 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:29.690 nvme0n3 : 1.01 11605.19 45.33 0.00 0.00 11007.15 5091.64 18551.73 00:17:29.690 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:29.690 nvme1n1 : 1.02 11567.54 45.19 0.00 0.00 11036.58 5368.91 17140.18 00:17:29.690 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:29.690 nvme2n1 : 1.02 11554.57 45.14 0.00 0.00 11042.85 5444.53 17140.18 00:17:29.690 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:29.690 nvme3n1 : 1.02 17750.40 69.34 0.00 0.00 7181.70 3428.04 17341.83 00:17:29.690 [2024-12-06T22:08:02.562Z] =================================================================================================================== 00:17:29.690 [2024-12-06T22:08:02.562Z] Total : 75476.13 294.83 0.00 0.00 10150.49 3428.04 18551.73 00:17:30.633 ************************************ 00:17:30.633 END TEST bdev_write_zeroes 00:17:30.633 ************************************ 00:17:30.633 00:17:30.633 real 0m2.508s 00:17:30.633 user 0m1.827s 00:17:30.633 sys 0m0.516s 00:17:30.633 22:08:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.634 22:08:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:30.634 22:08:03 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:30.634 22:08:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:30.634 22:08:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.634 22:08:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:30.634 ************************************ 00:17:30.634 START TEST bdev_json_nonenclosed 00:17:30.634 ************************************ 00:17:30.634 22:08:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:30.634 [2024-12-06 22:08:03.383613] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:17:30.634 [2024-12-06 22:08:03.384052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73297 ] 00:17:30.895 [2024-12-06 22:08:03.554286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.895 [2024-12-06 22:08:03.699511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.895 [2024-12-06 22:08:03.699629] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:30.895 [2024-12-06 22:08:03.699651] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:30.895 [2024-12-06 22:08:03.699664] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:31.155 00:17:31.155 real 0m0.612s 00:17:31.155 user 0m0.364s 00:17:31.155 sys 0m0.139s 00:17:31.155 22:08:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.155 ************************************ 00:17:31.155 END TEST bdev_json_nonenclosed 00:17:31.155 ************************************ 00:17:31.155 22:08:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:31.155 22:08:03 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:31.155 22:08:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:31.155 22:08:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.155 22:08:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:31.155 ************************************ 00:17:31.155 START TEST bdev_json_nonarray 00:17:31.155 ************************************ 00:17:31.155 22:08:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:31.416 [2024-12-06 22:08:04.048765] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:17:31.416 [2024-12-06 22:08:04.048950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73328 ] 00:17:31.416 [2024-12-06 22:08:04.218518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.677 [2024-12-06 22:08:04.352732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.677 [2024-12-06 22:08:04.352871] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:31.677 [2024-12-06 22:08:04.352894] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:31.677 [2024-12-06 22:08:04.352906] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:31.936 00:17:31.936 real 0m0.583s 00:17:31.936 user 0m0.356s 00:17:31.936 sys 0m0.121s 00:17:31.936 ************************************ 00:17:31.936 END TEST bdev_json_nonarray 00:17:31.936 ************************************ 00:17:31.936 22:08:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.936 22:08:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:31.936 22:08:04 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:32.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:36.711 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:36.711 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:36.711 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:36.972 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:36.972 00:17:36.972 real 0m55.748s 00:17:36.972 user 1m21.846s 00:17:36.972 sys 0m38.173s 00:17:36.972 ************************************ 00:17:36.972 END TEST blockdev_xnvme 00:17:36.972 ************************************ 00:17:36.972 22:08:09 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.972 22:08:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:36.972 22:08:09 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:36.972 22:08:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:36.972 22:08:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.972 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:17:36.972 ************************************ 00:17:36.972 START TEST ublk 00:17:36.972 ************************************ 00:17:36.972 22:08:09 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:36.972 * Looking for test storage... 
00:17:36.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:36.972 22:08:09 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:36.972 22:08:09 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:17:36.972 22:08:09 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:37.234 22:08:09 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:37.234 22:08:09 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.234 22:08:09 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.234 22:08:09 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.234 22:08:09 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.234 22:08:09 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.234 22:08:09 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.234 22:08:09 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.234 22:08:09 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.234 22:08:09 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.234 22:08:09 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.234 22:08:09 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.234 22:08:09 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:37.234 22:08:09 ublk -- scripts/common.sh@345 -- # : 1 00:17:37.234 22:08:09 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.234 22:08:09 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:37.234 22:08:09 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:37.234 22:08:09 ublk -- scripts/common.sh@353 -- # local d=1 00:17:37.234 22:08:09 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.234 22:08:09 ublk -- scripts/common.sh@355 -- # echo 1 00:17:37.234 22:08:09 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.234 22:08:09 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:37.234 22:08:09 ublk -- scripts/common.sh@353 -- # local d=2 00:17:37.234 22:08:09 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.234 22:08:09 ublk -- scripts/common.sh@355 -- # echo 2 00:17:37.234 22:08:09 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.234 22:08:09 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.234 22:08:09 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.234 22:08:09 ublk -- scripts/common.sh@368 -- # return 0 00:17:37.234 22:08:09 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.234 22:08:09 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.234 --rc genhtml_branch_coverage=1 00:17:37.234 --rc genhtml_function_coverage=1 00:17:37.234 --rc genhtml_legend=1 00:17:37.234 --rc geninfo_all_blocks=1 00:17:37.234 --rc geninfo_unexecuted_blocks=1 00:17:37.234 00:17:37.234 ' 00:17:37.234 22:08:09 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.235 --rc genhtml_branch_coverage=1 00:17:37.235 --rc genhtml_function_coverage=1 00:17:37.235 --rc genhtml_legend=1 00:17:37.235 --rc geninfo_all_blocks=1 00:17:37.235 --rc geninfo_unexecuted_blocks=1 00:17:37.235 00:17:37.235 ' 00:17:37.235 22:08:09 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.235 --rc genhtml_branch_coverage=1 00:17:37.235 --rc 
genhtml_function_coverage=1 00:17:37.235 --rc genhtml_legend=1 00:17:37.235 --rc geninfo_all_blocks=1 00:17:37.235 --rc geninfo_unexecuted_blocks=1 00:17:37.235 00:17:37.235 ' 00:17:37.235 22:08:09 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:37.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.235 --rc genhtml_branch_coverage=1 00:17:37.235 --rc genhtml_function_coverage=1 00:17:37.235 --rc genhtml_legend=1 00:17:37.235 --rc geninfo_all_blocks=1 00:17:37.235 --rc geninfo_unexecuted_blocks=1 00:17:37.235 00:17:37.235 ' 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:37.235 22:08:09 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:37.235 22:08:09 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:37.235 22:08:09 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:37.235 22:08:09 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:37.235 22:08:09 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:37.235 22:08:09 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:37.235 22:08:09 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:37.235 22:08:09 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:37.235 22:08:09 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:37.235 22:08:09 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:37.235 22:08:09 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.235 22:08:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:37.235 ************************************ 00:17:37.235 START TEST test_save_ublk_config 00:17:37.235 ************************************ 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:37.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
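The run that starts here (START TEST test_save_ublk_config) drives everything through rpc.py against a freshly started spdk_tgt. For orientation, a minimal by-hand sketch of the save half of the round-trip, assuming an SPDK checkout at $SPDK and root privileges (both placeholders, not taken from this log):

  # Sketch only: reproduce the config-save flow the test performs below.
  SPDK=/home/vagrant/spdk_repo/spdk            # placeholder checkout path
  sudo modprobe ublk_drv                        # kernel driver, loaded earlier in this run
  "$SPDK"/build/bin/spdk_tgt -L ublk &          # target with ublk debug logging
  tgtpid=$!
  sleep 1                                       # crude stand-in for the suite's waitforlisten
  "$SPDK"/scripts/rpc.py ublk_create_target
  "$SPDK"/scripts/rpc.py bdev_malloc_create -b malloc0 128 4096
  "$SPDK"/scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
  config=$("$SPDK"/scripts/rpc.py save_config)  # the JSON dump that appears below
  kill "$tgtpid"; wait "$tgtpid"

The test then boots a second target from that captured JSON, which is what the rest of this section exercises.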
00:17:37.235 22:08:09 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73629 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73629 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73629 ']' 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.235 22:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:37.235 [2024-12-06 22:08:10.002143] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:17:37.235 [2024-12-06 22:08:10.002533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73629 ] 00:17:37.496 [2024-12-06 22:08:10.169096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.496 [2024-12-06 22:08:10.293449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:38.441 [2024-12-06 22:08:11.031200] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:38.441 [2024-12-06 22:08:11.032102] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:38.441 malloc0 00:17:38.441 [2024-12-06 22:08:11.103348] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:38.441 [2024-12-06 22:08:11.103452] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:38.441 [2024-12-06 22:08:11.103463] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:38.441 [2024-12-06 22:08:11.103471] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:38.441 [2024-12-06 22:08:11.112308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:38.441 [2024-12-06 22:08:11.112343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:38.441 [2024-12-06 22:08:11.114493] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 
00:17:38.441 [2024-12-06 22:08:11.114625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:38.441 [2024-12-06 22:08:11.127265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:38.441 0 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.441 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:38.702 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.702 22:08:11 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:38.702 "subsystems": [ 00:17:38.702 { 00:17:38.702 "subsystem": "fsdev", 00:17:38.702 "config": [ 00:17:38.702 { 00:17:38.702 "method": "fsdev_set_opts", 00:17:38.702 "params": { 00:17:38.702 "fsdev_io_pool_size": 65535, 00:17:38.702 "fsdev_io_cache_size": 256 00:17:38.702 } 00:17:38.702 } 00:17:38.702 ] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "keyring", 00:17:38.702 "config": [] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "iobuf", 00:17:38.702 "config": [ 00:17:38.702 { 00:17:38.702 "method": "iobuf_set_options", 00:17:38.702 "params": { 00:17:38.702 "small_pool_count": 8192, 00:17:38.702 "large_pool_count": 1024, 00:17:38.702 "small_bufsize": 8192, 00:17:38.702 "large_bufsize": 135168, 00:17:38.702 "enable_numa": false 00:17:38.702 } 00:17:38.702 } 00:17:38.702 ] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "sock", 00:17:38.702 "config": [ 00:17:38.702 { 00:17:38.702 "method": "sock_set_default_impl", 00:17:38.702 "params": { 00:17:38.702 "impl_name": "posix" 00:17:38.702 } 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "method": "sock_impl_set_options", 00:17:38.702 "params": { 00:17:38.702 "impl_name": "ssl", 00:17:38.702 "recv_buf_size": 4096, 00:17:38.702 "send_buf_size": 4096, 00:17:38.702 "enable_recv_pipe": true, 00:17:38.702 "enable_quickack": false, 00:17:38.702 "enable_placement_id": 0, 00:17:38.702 "enable_zerocopy_send_server": true, 00:17:38.702 "enable_zerocopy_send_client": false, 00:17:38.702 "zerocopy_threshold": 0, 00:17:38.702 "tls_version": 0, 00:17:38.702 "enable_ktls": false 00:17:38.702 } 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "method": "sock_impl_set_options", 00:17:38.702 "params": { 00:17:38.702 "impl_name": "posix", 00:17:38.702 "recv_buf_size": 2097152, 00:17:38.702 "send_buf_size": 2097152, 00:17:38.702 "enable_recv_pipe": true, 00:17:38.702 "enable_quickack": false, 00:17:38.702 "enable_placement_id": 0, 00:17:38.702 "enable_zerocopy_send_server": true, 00:17:38.702 "enable_zerocopy_send_client": false, 00:17:38.702 "zerocopy_threshold": 0, 00:17:38.702 "tls_version": 0, 00:17:38.702 "enable_ktls": false 00:17:38.702 } 00:17:38.702 } 00:17:38.702 ] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "vmd", 00:17:38.702 "config": [] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "accel", 00:17:38.702 "config": [ 00:17:38.702 { 00:17:38.702 "method": "accel_set_options", 00:17:38.702 "params": { 00:17:38.702 "small_cache_size": 128, 00:17:38.702 "large_cache_size": 16, 00:17:38.702 "task_count": 2048, 00:17:38.702 "sequence_count": 2048, 00:17:38.702 "buf_count": 2048 00:17:38.702 } 00:17:38.702 } 00:17:38.702 ] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": 
"bdev", 00:17:38.702 "config": [ 00:17:38.702 { 00:17:38.702 "method": "bdev_set_options", 00:17:38.702 "params": { 00:17:38.702 "bdev_io_pool_size": 65535, 00:17:38.702 "bdev_io_cache_size": 256, 00:17:38.702 "bdev_auto_examine": true, 00:17:38.702 "iobuf_small_cache_size": 128, 00:17:38.702 "iobuf_large_cache_size": 16 00:17:38.702 } 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "method": "bdev_raid_set_options", 00:17:38.702 "params": { 00:17:38.702 "process_window_size_kb": 1024, 00:17:38.702 "process_max_bandwidth_mb_sec": 0 00:17:38.702 } 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "method": "bdev_iscsi_set_options", 00:17:38.702 "params": { 00:17:38.702 "timeout_sec": 30 00:17:38.702 } 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "method": "bdev_nvme_set_options", 00:17:38.702 "params": { 00:17:38.702 "action_on_timeout": "none", 00:17:38.702 "timeout_us": 0, 00:17:38.702 "timeout_admin_us": 0, 00:17:38.702 "keep_alive_timeout_ms": 10000, 00:17:38.702 "arbitration_burst": 0, 00:17:38.702 "low_priority_weight": 0, 00:17:38.702 "medium_priority_weight": 0, 00:17:38.702 "high_priority_weight": 0, 00:17:38.702 "nvme_adminq_poll_period_us": 10000, 00:17:38.702 "nvme_ioq_poll_period_us": 0, 00:17:38.702 "io_queue_requests": 0, 00:17:38.702 "delay_cmd_submit": true, 00:17:38.702 "transport_retry_count": 4, 00:17:38.702 "bdev_retry_count": 3, 00:17:38.702 "transport_ack_timeout": 0, 00:17:38.702 "ctrlr_loss_timeout_sec": 0, 00:17:38.702 "reconnect_delay_sec": 0, 00:17:38.702 "fast_io_fail_timeout_sec": 0, 00:17:38.702 "disable_auto_failback": false, 00:17:38.702 "generate_uuids": false, 00:17:38.702 "transport_tos": 0, 00:17:38.702 "nvme_error_stat": false, 00:17:38.702 "rdma_srq_size": 0, 00:17:38.702 "io_path_stat": false, 00:17:38.702 "allow_accel_sequence": false, 00:17:38.702 "rdma_max_cq_size": 0, 00:17:38.702 "rdma_cm_event_timeout_ms": 0, 00:17:38.702 "dhchap_digests": [ 00:17:38.702 "sha256", 00:17:38.702 "sha384", 00:17:38.702 "sha512" 00:17:38.702 ], 00:17:38.702 "dhchap_dhgroups": [ 00:17:38.702 "null", 00:17:38.702 "ffdhe2048", 00:17:38.702 "ffdhe3072", 00:17:38.702 "ffdhe4096", 00:17:38.702 "ffdhe6144", 00:17:38.702 "ffdhe8192" 00:17:38.702 ] 00:17:38.702 } 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "method": "bdev_nvme_set_hotplug", 00:17:38.702 "params": { 00:17:38.702 "period_us": 100000, 00:17:38.702 "enable": false 00:17:38.702 } 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "method": "bdev_malloc_create", 00:17:38.702 "params": { 00:17:38.702 "name": "malloc0", 00:17:38.702 "num_blocks": 8192, 00:17:38.702 "block_size": 4096, 00:17:38.702 "physical_block_size": 4096, 00:17:38.702 "uuid": "277c7f65-928c-40bc-877c-02998b4cf804", 00:17:38.702 "optimal_io_boundary": 0, 00:17:38.702 "md_size": 0, 00:17:38.702 "dif_type": 0, 00:17:38.702 "dif_is_head_of_md": false, 00:17:38.702 "dif_pi_format": 0 00:17:38.702 } 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "method": "bdev_wait_for_examine" 00:17:38.702 } 00:17:38.702 ] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "scsi", 00:17:38.702 "config": null 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "scheduler", 00:17:38.702 "config": [ 00:17:38.702 { 00:17:38.702 "method": "framework_set_scheduler", 00:17:38.702 "params": { 00:17:38.702 "name": "static" 00:17:38.702 } 00:17:38.702 } 00:17:38.702 ] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "vhost_scsi", 00:17:38.702 "config": [] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "vhost_blk", 00:17:38.702 "config": [] 00:17:38.702 }, 00:17:38.702 
{ 00:17:38.702 "subsystem": "ublk", 00:17:38.702 "config": [ 00:17:38.702 { 00:17:38.702 "method": "ublk_create_target", 00:17:38.702 "params": { 00:17:38.702 "cpumask": "1" 00:17:38.702 } 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "method": "ublk_start_disk", 00:17:38.702 "params": { 00:17:38.702 "bdev_name": "malloc0", 00:17:38.702 "ublk_id": 0, 00:17:38.702 "num_queues": 1, 00:17:38.702 "queue_depth": 128 00:17:38.702 } 00:17:38.702 } 00:17:38.702 ] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "nbd", 00:17:38.702 "config": [] 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "subsystem": "nvmf", 00:17:38.702 "config": [ 00:17:38.702 { 00:17:38.702 "method": "nvmf_set_config", 00:17:38.702 "params": { 00:17:38.702 "discovery_filter": "match_any", 00:17:38.703 "admin_cmd_passthru": { 00:17:38.703 "identify_ctrlr": false 00:17:38.703 }, 00:17:38.703 "dhchap_digests": [ 00:17:38.703 "sha256", 00:17:38.703 "sha384", 00:17:38.703 "sha512" 00:17:38.703 ], 00:17:38.703 "dhchap_dhgroups": [ 00:17:38.703 "null", 00:17:38.703 "ffdhe2048", 00:17:38.703 "ffdhe3072", 00:17:38.703 "ffdhe4096", 00:17:38.703 "ffdhe6144", 00:17:38.703 "ffdhe8192" 00:17:38.703 ] 00:17:38.703 } 00:17:38.703 }, 00:17:38.703 { 00:17:38.703 "method": "nvmf_set_max_subsystems", 00:17:38.703 "params": { 00:17:38.703 "max_subsystems": 1024 00:17:38.703 } 00:17:38.703 }, 00:17:38.703 { 00:17:38.703 "method": "nvmf_set_crdt", 00:17:38.703 "params": { 00:17:38.703 "crdt1": 0, 00:17:38.703 "crdt2": 0, 00:17:38.703 "crdt3": 0 00:17:38.703 } 00:17:38.703 } 00:17:38.703 ] 00:17:38.703 }, 00:17:38.703 { 00:17:38.703 "subsystem": "iscsi", 00:17:38.703 "config": [ 00:17:38.703 { 00:17:38.703 "method": "iscsi_set_options", 00:17:38.703 "params": { 00:17:38.703 "node_base": "iqn.2016-06.io.spdk", 00:17:38.703 "max_sessions": 128, 00:17:38.703 "max_connections_per_session": 2, 00:17:38.703 "max_queue_depth": 64, 00:17:38.703 "default_time2wait": 2, 00:17:38.703 "default_time2retain": 20, 00:17:38.703 "first_burst_length": 8192, 00:17:38.703 "immediate_data": true, 00:17:38.703 "allow_duplicated_isid": false, 00:17:38.703 "error_recovery_level": 0, 00:17:38.703 "nop_timeout": 60, 00:17:38.703 "nop_in_interval": 30, 00:17:38.703 "disable_chap": false, 00:17:38.703 "require_chap": false, 00:17:38.703 "mutual_chap": false, 00:17:38.703 "chap_group": 0, 00:17:38.703 "max_large_datain_per_connection": 64, 00:17:38.703 "max_r2t_per_connection": 4, 00:17:38.703 "pdu_pool_size": 36864, 00:17:38.703 "immediate_data_pool_size": 16384, 00:17:38.703 "data_out_pool_size": 2048 00:17:38.703 } 00:17:38.703 } 00:17:38.703 ] 00:17:38.703 } 00:17:38.703 ] 00:17:38.703 }' 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73629 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73629 ']' 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73629 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73629 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 73629' 00:17:38.703 killing process with pid 73629 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73629 00:17:38.703 22:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73629 00:17:40.086 [2024-12-06 22:08:12.815143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:40.086 [2024-12-06 22:08:12.846328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:40.086 [2024-12-06 22:08:12.846484] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:40.086 [2024-12-06 22:08:12.851468] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:40.086 [2024-12-06 22:08:12.851535] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:40.086 [2024-12-06 22:08:12.851550] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:40.086 [2024-12-06 22:08:12.851578] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:40.086 [2024-12-06 22:08:12.851772] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:41.472 22:08:14 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73686 00:17:41.472 22:08:14 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73686 00:17:41.472 22:08:14 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73686 ']' 00:17:41.472 22:08:14 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.472 22:08:14 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.472 22:08:14 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:41.472 22:08:14 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.472 22:08:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:41.472 22:08:14 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:41.472 "subsystems": [ 00:17:41.472 { 00:17:41.472 "subsystem": "fsdev", 00:17:41.472 "config": [ 00:17:41.472 { 00:17:41.472 "method": "fsdev_set_opts", 00:17:41.472 "params": { 00:17:41.472 "fsdev_io_pool_size": 65535, 00:17:41.472 "fsdev_io_cache_size": 256 00:17:41.472 } 00:17:41.472 } 00:17:41.472 ] 00:17:41.472 }, 00:17:41.472 { 00:17:41.473 "subsystem": "keyring", 00:17:41.473 "config": [] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "iobuf", 00:17:41.473 "config": [ 00:17:41.473 { 00:17:41.473 "method": "iobuf_set_options", 00:17:41.473 "params": { 00:17:41.473 "small_pool_count": 8192, 00:17:41.473 "large_pool_count": 1024, 00:17:41.473 "small_bufsize": 8192, 00:17:41.473 "large_bufsize": 135168, 00:17:41.473 "enable_numa": false 00:17:41.473 } 00:17:41.473 } 00:17:41.473 ] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "sock", 00:17:41.473 "config": [ 00:17:41.473 { 00:17:41.473 "method": "sock_set_default_impl", 00:17:41.473 "params": { 00:17:41.473 "impl_name": "posix" 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "sock_impl_set_options", 00:17:41.473 "params": { 00:17:41.473 "impl_name": "ssl", 00:17:41.473 "recv_buf_size": 4096, 00:17:41.473 "send_buf_size": 4096, 00:17:41.473 "enable_recv_pipe": true, 00:17:41.473 "enable_quickack": false, 00:17:41.473 "enable_placement_id": 0, 00:17:41.473 "enable_zerocopy_send_server": true, 00:17:41.473 "enable_zerocopy_send_client": false, 00:17:41.473 "zerocopy_threshold": 0, 00:17:41.473 "tls_version": 0, 00:17:41.473 "enable_ktls": false 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "sock_impl_set_options", 00:17:41.473 "params": { 00:17:41.473 "impl_name": "posix", 00:17:41.473 "recv_buf_size": 2097152, 00:17:41.473 "send_buf_size": 2097152, 00:17:41.473 "enable_recv_pipe": true, 00:17:41.473 "enable_quickack": false, 00:17:41.473 "enable_placement_id": 0, 00:17:41.473 "enable_zerocopy_send_server": true, 00:17:41.473 "enable_zerocopy_send_client": false, 00:17:41.473 "zerocopy_threshold": 0, 00:17:41.473 "tls_version": 0, 00:17:41.473 "enable_ktls": false 00:17:41.473 } 00:17:41.473 } 00:17:41.473 ] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "vmd", 00:17:41.473 "config": [] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "accel", 00:17:41.473 "config": [ 00:17:41.473 { 00:17:41.473 "method": "accel_set_options", 00:17:41.473 "params": { 00:17:41.473 "small_cache_size": 128, 00:17:41.473 "large_cache_size": 16, 00:17:41.473 "task_count": 2048, 00:17:41.473 "sequence_count": 2048, 00:17:41.473 "buf_count": 2048 00:17:41.473 } 00:17:41.473 } 00:17:41.473 ] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "bdev", 00:17:41.473 "config": [ 00:17:41.473 { 00:17:41.473 "method": "bdev_set_options", 00:17:41.473 "params": { 00:17:41.473 "bdev_io_pool_size": 65535, 00:17:41.473 "bdev_io_cache_size": 256, 00:17:41.473 "bdev_auto_examine": true, 00:17:41.473 "iobuf_small_cache_size": 128, 00:17:41.473 "iobuf_large_cache_size": 16 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "bdev_raid_set_options", 00:17:41.473 "params": { 00:17:41.473 "process_window_size_kb": 1024, 00:17:41.473 "process_max_bandwidth_mb_sec": 0 00:17:41.473 } 00:17:41.473 }, 
00:17:41.473 { 00:17:41.473 "method": "bdev_iscsi_set_options", 00:17:41.473 "params": { 00:17:41.473 "timeout_sec": 30 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "bdev_nvme_set_options", 00:17:41.473 "params": { 00:17:41.473 "action_on_timeout": "none", 00:17:41.473 "timeout_us": 0, 00:17:41.473 "timeout_admin_us": 0, 00:17:41.473 "keep_alive_timeout_ms": 10000, 00:17:41.473 "arbitration_burst": 0, 00:17:41.473 "low_priority_weight": 0, 00:17:41.473 "medium_priority_weight": 0, 00:17:41.473 "high_priority_weight": 0, 00:17:41.473 "nvme_adminq_poll_period_us": 10000, 00:17:41.473 "nvme_ioq_poll_period_us": 0, 00:17:41.473 "io_queue_requests": 0, 00:17:41.473 "delay_cmd_submit": true, 00:17:41.473 "transport_retry_count": 4, 00:17:41.473 "bdev_retry_count": 3, 00:17:41.473 "transport_ack_timeout": 0, 00:17:41.473 "ctrlr_loss_timeout_sec": 0, 00:17:41.473 "reconnect_delay_sec": 0, 00:17:41.473 "fast_io_fail_timeout_sec": 0, 00:17:41.473 "disable_auto_failback": false, 00:17:41.473 "generate_uuids": false, 00:17:41.473 "transport_tos": 0, 00:17:41.473 "nvme_error_stat": false, 00:17:41.473 "rdma_srq_size": 0, 00:17:41.473 "io_path_stat": false, 00:17:41.473 "allow_accel_sequence": false, 00:17:41.473 "rdma_max_cq_size": 0, 00:17:41.473 "rdma_cm_event_timeout_ms": 0, 00:17:41.473 "dhchap_digests": [ 00:17:41.473 "sha256", 00:17:41.473 "sha384", 00:17:41.473 "sha512" 00:17:41.473 ], 00:17:41.473 "dhchap_dhgroups": [ 00:17:41.473 "null", 00:17:41.473 "ffdhe2048", 00:17:41.473 "ffdhe3072", 00:17:41.473 "ffdhe4096", 00:17:41.473 "ffdhe6144", 00:17:41.473 "ffdhe8192" 00:17:41.473 ] 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "bdev_nvme_set_hotplug", 00:17:41.473 "params": { 00:17:41.473 "period_us": 100000, 00:17:41.473 "enable": false 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "bdev_malloc_create", 00:17:41.473 "params": { 00:17:41.473 "name": "malloc0", 00:17:41.473 "num_blocks": 8192, 00:17:41.473 "block_size": 4096, 00:17:41.473 "physical_block_size": 4096, 00:17:41.473 "uuid": "277c7f65-928c-40bc-877c-02998b4cf804", 00:17:41.473 "optimal_io_boundary": 0, 00:17:41.473 "md_size": 0, 00:17:41.473 "dif_type": 0, 00:17:41.473 "dif_is_head_of_md": false, 00:17:41.473 "dif_pi_format": 0 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "bdev_wait_for_examine" 00:17:41.473 } 00:17:41.473 ] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "scsi", 00:17:41.473 "config": null 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "scheduler", 00:17:41.473 "config": [ 00:17:41.473 { 00:17:41.473 "method": "framework_set_scheduler", 00:17:41.473 "params": { 00:17:41.473 "name": "static" 00:17:41.473 } 00:17:41.473 } 00:17:41.473 ] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "vhost_scsi", 00:17:41.473 "config": [] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "vhost_blk", 00:17:41.473 "config": [] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "ublk", 00:17:41.473 "config": [ 00:17:41.473 { 00:17:41.473 "method": "ublk_create_target", 00:17:41.473 "params": { 00:17:41.473 "cpumask": "1" 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "ublk_start_disk", 00:17:41.473 "params": { 00:17:41.473 "bdev_name": "malloc0", 00:17:41.473 "ublk_id": 0, 00:17:41.473 "num_queues": 1, 00:17:41.473 "queue_depth": 128 00:17:41.473 } 00:17:41.473 } 00:17:41.473 ] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "nbd", 00:17:41.473 "config": [] 00:17:41.473 }, 
00:17:41.473 { 00:17:41.473 "subsystem": "nvmf", 00:17:41.473 "config": [ 00:17:41.473 { 00:17:41.473 "method": "nvmf_set_config", 00:17:41.473 "params": { 00:17:41.473 "discovery_filter": "match_any", 00:17:41.473 "admin_cmd_passthru": { 00:17:41.473 "identify_ctrlr": false 00:17:41.473 }, 00:17:41.473 "dhchap_digests": [ 00:17:41.473 "sha256", 00:17:41.473 "sha384", 00:17:41.473 "sha512" 00:17:41.473 ], 00:17:41.473 "dhchap_dhgroups": [ 00:17:41.473 "null", 00:17:41.473 "ffdhe2048", 00:17:41.473 "ffdhe3072", 00:17:41.473 "ffdhe4096", 00:17:41.473 "ffdhe6144", 00:17:41.473 "ffdhe8192" 00:17:41.473 ] 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "nvmf_set_max_subsystems", 00:17:41.473 "params": { 00:17:41.473 "max_subsystems": 1024 00:17:41.473 } 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "method": "nvmf_set_crdt", 00:17:41.473 "params": { 00:17:41.473 "crdt1": 0, 00:17:41.473 "crdt2": 0, 00:17:41.473 "crdt3": 0 00:17:41.473 } 00:17:41.473 } 00:17:41.473 ] 00:17:41.473 }, 00:17:41.473 { 00:17:41.473 "subsystem": "iscsi", 00:17:41.473 "config": [ 00:17:41.473 { 00:17:41.473 "method": "iscsi_set_options", 00:17:41.473 "params": { 00:17:41.473 "node_base": "iqn.2016-06.io.spdk", 00:17:41.473 "max_sessions": 128, 00:17:41.473 "max_connections_per_session": 2, 00:17:41.473 "max_queue_depth": 64, 00:17:41.473 "default_time2wait": 2, 00:17:41.473 "default_time2retain": 20, 00:17:41.473 "first_burst_length": 8192, 00:17:41.473 "immediate_data": true, 00:17:41.473 "allow_duplicated_isid": false, 00:17:41.473 "error_recovery_level": 0, 00:17:41.473 "nop_timeout": 60, 00:17:41.473 "nop_in_interval": 30, 00:17:41.473 "disable_chap": false, 00:17:41.473 "require_chap": false, 00:17:41.473 "mutual_chap": false, 00:17:41.473 "chap_group": 0, 00:17:41.473 "max_large_datain_per_connection": 64, 00:17:41.473 "max_r2t_per_connection": 4, 00:17:41.473 "pdu_pool_size": 36864, 00:17:41.474 "immediate_data_pool_size": 16384, 00:17:41.474 "data_out_pool_size": 2048 00:17:41.474 } 00:17:41.474 } 00:17:41.474 ] 00:17:41.474 } 00:17:41.474 ] 00:17:41.474 }' 00:17:41.474 22:08:14 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:41.474 [2024-12-06 22:08:14.169426] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
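Here the JSON captured by save_config is echoed back into a brand-new target: the -c /dev/fd/63 argument above is bash process substitution, so the target parses the in-memory string exactly as it would a config file on disk. Equivalent by hand (a sketch, with $SPDK and $config as in the earlier sketch):

  # Replay a saved configuration without writing it to disk.
  "$SPDK"/build/bin/spdk_tgt -L ublk -c <(printf '%s' "$config") &
  # ...which the shell expands to: spdk_tgt -L ublk -c /dev/fd/63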
00:17:41.474 [2024-12-06 22:08:14.169546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73686 ] 00:17:41.474 [2024-12-06 22:08:14.327749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.734 [2024-12-06 22:08:14.439347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.676 [2024-12-06 22:08:15.307197] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:42.676 [2024-12-06 22:08:15.308133] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:42.676 [2024-12-06 22:08:15.315344] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:42.676 [2024-12-06 22:08:15.315438] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:42.676 [2024-12-06 22:08:15.315449] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:42.676 [2024-12-06 22:08:15.315457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:42.676 [2024-12-06 22:08:15.324320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:42.676 [2024-12-06 22:08:15.324352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:42.676 [2024-12-06 22:08:15.331215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:42.676 [2024-12-06 22:08:15.331335] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:42.676 [2024-12-06 22:08:15.348197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:42.676 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.676 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73686 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73686 ']' 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73686 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73686 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.677 killing process with pid 73686 00:17:42.677 
22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73686' 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73686 00:17:42.677 22:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73686 00:17:44.059 [2024-12-06 22:08:16.692882] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:44.059 [2024-12-06 22:08:16.723260] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:44.059 [2024-12-06 22:08:16.723356] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:44.059 [2024-12-06 22:08:16.732202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:44.059 [2024-12-06 22:08:16.732243] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:44.059 [2024-12-06 22:08:16.732249] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:44.059 [2024-12-06 22:08:16.732270] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:44.059 [2024-12-06 22:08:16.732377] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:45.445 22:08:17 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:45.445 00:17:45.445 real 0m8.016s 00:17:45.445 user 0m5.226s 00:17:45.445 sys 0m3.435s 00:17:45.445 22:08:17 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.445 ************************************ 00:17:45.445 END TEST test_save_ublk_config 00:17:45.445 22:08:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:45.445 ************************************ 00:17:45.445 22:08:17 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73764 00:17:45.445 22:08:17 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.445 22:08:17 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73764 00:17:45.445 22:08:17 ublk -- common/autotest_common.sh@835 -- # '[' -z 73764 ']' 00:17:45.445 22:08:17 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.445 22:08:17 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:45.445 22:08:17 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.445 22:08:17 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.445 22:08:17 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.445 22:08:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:45.445 [2024-12-06 22:08:18.040965] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
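This new target runs with -m 0x3, a cpumask selecting cores 0 and 1, so two reactors start (visible just below). The create tests that follow check device state by piping ublk_get_disks through jq; by hand that looks like the following sketch, paths as before:

  # Inspect ublk devices over RPC; both forms appear in the log below.
  "$SPDK"/scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # e.g. /dev/ublkb0
  "$SPDK"/scripts/rpc.py ublk_get_disks -n 0                         # filter to ublk id 0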
00:17:45.445 [2024-12-06 22:08:18.041059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73764 ] 00:17:45.445 [2024-12-06 22:08:18.190417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:45.445 [2024-12-06 22:08:18.268910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.445 [2024-12-06 22:08:18.268992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.018 22:08:18 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.018 22:08:18 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:46.018 22:08:18 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:46.018 22:08:18 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:46.018 22:08:18 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.018 22:08:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:46.018 ************************************ 00:17:46.018 START TEST test_create_ublk 00:17:46.018 ************************************ 00:17:46.018 22:08:18 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:46.018 22:08:18 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:46.018 22:08:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.018 22:08:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:46.281 [2024-12-06 22:08:18.894192] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:46.281 [2024-12-06 22:08:18.895762] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:46.281 22:08:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.281 22:08:18 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:46.281 22:08:18 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:46.281 22:08:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.281 22:08:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:46.281 22:08:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.281 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:46.281 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:46.281 22:08:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.281 22:08:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:46.281 [2024-12-06 22:08:19.058302] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:46.281 [2024-12-06 22:08:19.058597] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:46.281 [2024-12-06 22:08:19.058612] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:46.281 [2024-12-06 22:08:19.058617] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:46.281 [2024-12-06 22:08:19.067367] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:46.281 [2024-12-06 22:08:19.067385] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:46.281 
[2024-12-06 22:08:19.074204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:46.281 [2024-12-06 22:08:19.074684] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:46.281 [2024-12-06 22:08:19.089212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:46.281 22:08:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.281 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:46.281 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:46.281 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:46.281 22:08:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.281 22:08:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:46.281 22:08:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.281 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:46.281 { 00:17:46.281 "ublk_device": "/dev/ublkb0", 00:17:46.281 "id": 0, 00:17:46.281 "queue_depth": 512, 00:17:46.281 "num_queues": 4, 00:17:46.281 "bdev_name": "Malloc0" 00:17:46.281 } 00:17:46.281 ]' 00:17:46.281 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:46.281 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:46.281 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:46.542 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:46.542 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:46.542 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:46.542 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:46.542 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:46.542 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:46.542 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:46.542 22:08:19 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:46.542 22:08:19 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
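The job assembled above writes a 0xcc pattern over the 128 MiB device for 10 seconds; because it is time_based and write-only, fio's inline verify phase never gets to run (fio prints a note to that effect below). A separate read pass would actually check the pattern; a sketch only, not part of the suite:

  # Sketch: read back and verify the 0xcc pattern written by the job above.
  fio --name=verify_pass --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=read --direct=1 \
      --verify=pattern --verify_pattern=0xcc --verify_state_save=0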
00:17:46.543 22:08:19 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:46.543 fio: verification read phase will never start because write phase uses all of runtime 00:17:46.543 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:46.543 fio-3.35 00:17:46.543 Starting 1 process 00:17:58.763 00:17:58.763 fio_test: (groupid=0, jobs=1): err= 0: pid=73803: Fri Dec 6 22:08:29 2024 00:17:58.763 write: IOPS=17.0k, BW=66.3MiB/s (69.5MB/s)(663MiB/10001msec); 0 zone resets 00:17:58.763 clat (usec): min=32, max=4008, avg=58.20, stdev=84.81 00:17:58.763 lat (usec): min=32, max=4009, avg=58.60, stdev=84.82 00:17:58.763 clat percentiles (usec): 00:17:58.763 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 49], 20.00th=[ 52], 00:17:58.763 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:17:58.763 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 63], 95.00th=[ 67], 00:17:58.763 | 99.00th=[ 77], 99.50th=[ 83], 99.90th=[ 1401], 99.95th=[ 2474], 00:17:58.763 | 99.99th=[ 3523] 00:17:58.763 bw ( KiB/s): min=61472, max=82272, per=100.00%, avg=68235.37, stdev=4432.60, samples=19 00:17:58.763 iops : min=15368, max=20568, avg=17058.84, stdev=1108.15, samples=19 00:17:58.763 lat (usec) : 50=13.44%, 100=86.28%, 250=0.13%, 500=0.01%, 750=0.01% 00:17:58.763 lat (usec) : 1000=0.01% 00:17:58.763 lat (msec) : 2=0.05%, 4=0.08%, 10=0.01% 00:17:58.763 cpu : usr=2.35%, sys=12.37%, ctx=169806, majf=0, minf=796 00:17:58.763 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.763 issued rwts: total=0,169806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.763 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.763 00:17:58.764 Run status group 0 (all jobs): 00:17:58.764 WRITE: bw=66.3MiB/s (69.5MB/s), 66.3MiB/s-66.3MiB/s (69.5MB/s-69.5MB/s), io=663MiB (696MB), run=10001-10001msec 00:17:58.764 00:17:58.764 Disk stats (read/write): 00:17:58.764 ublkb0: ios=0/168231, merge=0/0, ticks=0/8505, in_queue=8506, util=99.09% 00:17:58.764 22:08:29 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 [2024-12-06 22:08:29.503055] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:58.764 [2024-12-06 22:08:29.536717] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:58.764 [2024-12-06 22:08:29.537711] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:58.764 [2024-12-06 22:08:29.546203] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:58.764 [2024-12-06 22:08:29.546439] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:58.764 [2024-12-06 22:08:29.546448] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.764 22:08:29 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 [2024-12-06 22:08:29.562255] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:58.764 request: 00:17:58.764 { 00:17:58.764 "ublk_id": 0, 00:17:58.764 "method": "ublk_stop_disk", 00:17:58.764 "req_id": 1 00:17:58.764 } 00:17:58.764 Got JSON-RPC error response 00:17:58.764 response: 00:17:58.764 { 00:17:58.764 "code": -19, 00:17:58.764 "message": "No such device" 00:17:58.764 } 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.764 22:08:29 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 [2024-12-06 22:08:29.578252] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:58.764 [2024-12-06 22:08:29.586191] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:58.764 [2024-12-06 22:08:29.586222] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.764 22:08:29 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.764 22:08:29 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:58.764 22:08:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.764 22:08:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:58.764 22:08:29 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:58.764 22:08:29 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:58.764 22:08:29 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.764 22:08:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 22:08:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.764 22:08:30 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:58.764 22:08:30 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:58.764 ************************************ 00:17:58.764 END TEST test_create_ublk 00:17:58.764 ************************************ 00:17:58.764 22:08:30 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:58.764 00:17:58.764 real 0m11.156s 00:17:58.764 user 0m0.541s 00:17:58.764 sys 0m1.312s 00:17:58.764 22:08:30 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.764 22:08:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 22:08:30 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:58.764 22:08:30 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:58.764 22:08:30 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.764 22:08:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 ************************************ 00:17:58.764 START TEST test_create_multi_ublk 00:17:58.764 ************************************ 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 [2024-12-06 22:08:30.078189] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:58.764 [2024-12-06 22:08:30.080230] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.764 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 [2024-12-06 22:08:30.318296] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:58.765 [2024-12-06 22:08:30.318595] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:58.765 [2024-12-06 22:08:30.318607] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:58.765 [2024-12-06 22:08:30.318616] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:58.765 [2024-12-06 22:08:30.338224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:58.765 [2024-12-06 22:08:30.338246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:58.765 [2024-12-06 22:08:30.350198] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:58.765 [2024-12-06 22:08:30.350689] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:58.765 [2024-12-06 22:08:30.374200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.765 [2024-12-06 22:08:30.589295] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:58.765 [2024-12-06 22:08:30.589594] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:58.765 [2024-12-06 22:08:30.589608] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:58.765 [2024-12-06 22:08:30.589613] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:58.765 [2024-12-06 22:08:30.597215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:58.765 [2024-12-06 22:08:30.597234] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:58.765 [2024-12-06 22:08:30.605200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:58.765 [2024-12-06 22:08:30.605691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:58.765 [2024-12-06 22:08:30.614228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:58.765 
22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.765 [2024-12-06 22:08:30.773277] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:58.765 [2024-12-06 22:08:30.773581] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:58.765 [2024-12-06 22:08:30.773593] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:58.765 [2024-12-06 22:08:30.773599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:58.765 [2024-12-06 22:08:30.781212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:58.765 [2024-12-06 22:08:30.781231] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:58.765 [2024-12-06 22:08:30.789209] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:58.765 [2024-12-06 22:08:30.789705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:58.765 [2024-12-06 22:08:30.810204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.765 [2024-12-06 22:08:30.969298] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:58.765 [2024-12-06 22:08:30.969599] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:58.765 [2024-12-06 22:08:30.969611] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:58.765 [2024-12-06 22:08:30.969616] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:58.765 
[2024-12-06 22:08:30.977207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:58.765 [2024-12-06 22:08:30.977246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:58.765 [2024-12-06 22:08:30.985203] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:58.765 [2024-12-06 22:08:30.985693] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:58.765 [2024-12-06 22:08:30.989882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.765 22:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:58.765 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.765 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:58.765 { 00:17:58.765 "ublk_device": "/dev/ublkb0", 00:17:58.765 "id": 0, 00:17:58.765 "queue_depth": 512, 00:17:58.765 "num_queues": 4, 00:17:58.765 "bdev_name": "Malloc0" 00:17:58.765 }, 00:17:58.765 { 00:17:58.765 "ublk_device": "/dev/ublkb1", 00:17:58.765 "id": 1, 00:17:58.765 "queue_depth": 512, 00:17:58.765 "num_queues": 4, 00:17:58.765 "bdev_name": "Malloc1" 00:17:58.765 }, 00:17:58.765 { 00:17:58.765 "ublk_device": "/dev/ublkb2", 00:17:58.765 "id": 2, 00:17:58.765 "queue_depth": 512, 00:17:58.765 "num_queues": 4, 00:17:58.765 "bdev_name": "Malloc2" 00:17:58.765 }, 00:17:58.766 { 00:17:58.766 "ublk_device": "/dev/ublkb3", 00:17:58.766 "id": 3, 00:17:58.766 "queue_depth": 512, 00:17:58.766 "num_queues": 4, 00:17:58.766 "bdev_name": "Malloc3" 00:17:58.766 } 00:17:58.766 ]' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:58.766 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:59.029 [2024-12-06 22:08:31.645271] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:59.029 [2024-12-06 22:08:31.677200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:59.029 [2024-12-06 22:08:31.678105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:59.029 [2024-12-06 22:08:31.685206] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:59.029 [2024-12-06 22:08:31.685446] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:59.029 [2024-12-06 22:08:31.685460] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.029 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:59.029 [2024-12-06 22:08:31.701265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:59.029 [2024-12-06 22:08:31.733237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:59.029 [2024-12-06 22:08:31.734016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:59.029 [2024-12-06 22:08:31.742231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:59.030 [2024-12-06 22:08:31.742485] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:59.030 [2024-12-06 22:08:31.742496] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:59.030 [2024-12-06 22:08:31.757259] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:59.030 [2024-12-06 22:08:31.798705] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:59.030 [2024-12-06 22:08:31.799791] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:59.030 [2024-12-06 22:08:31.805199] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:59.030 [2024-12-06 22:08:31.805429] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:59.030 [2024-12-06 22:08:31.805442] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.030 [2024-12-06 22:08:31.821261] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:59.030 [2024-12-06 22:08:31.862678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:59.030 [2024-12-06 22:08:31.863667] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:59.030 [2024-12-06 22:08:31.869205] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:59.030 [2024-12-06 22:08:31.869440] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:59.030 [2024-12-06 22:08:31.869453] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.030 22:08:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:59.352 [2024-12-06 22:08:32.069253] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:59.352 [2024-12-06 22:08:32.077188] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:59.352 [2024-12-06 22:08:32.077217] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:59.352 22:08:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:59.352 22:08:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:59.352 22:08:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:59.352 22:08:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.352 22:08:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:59.643 22:08:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.643 22:08:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:59.643 22:08:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:59.643 22:08:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.643 22:08:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.210 22:08:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.210 22:08:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:00.210 22:08:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:00.210 22:08:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.210 22:08:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.468 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.468 22:08:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:00.468 22:08:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:00.468 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.468 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:00.727 22:08:33 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:00.727 ************************************ 00:18:00.727 END TEST test_create_multi_ublk 00:18:00.727 ************************************ 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:00.727 00:18:00.727 real 0m3.519s 00:18:00.727 user 0m0.799s 00:18:00.727 sys 0m0.148s 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.727 22:08:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.985 22:08:33 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:00.985 22:08:33 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:00.985 22:08:33 ublk -- ublk/ublk.sh@130 -- # killprocess 73764 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@954 -- # '[' -z 73764 ']' 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@958 -- # kill -0 73764 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@959 -- # uname 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73764 00:18:00.985 killing process with pid 73764 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73764' 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@973 -- # kill 73764 00:18:00.985 22:08:33 ublk -- common/autotest_common.sh@978 -- # wait 73764 00:18:01.552 [2024-12-06 22:08:34.210186] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:01.552 [2024-12-06 22:08:34.210230] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:02.121 00:18:02.121 real 0m25.104s 00:18:02.121 user 0m34.923s 00:18:02.121 sys 0m10.185s 00:18:02.121 22:08:34 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.121 ************************************ 00:18:02.121 END TEST ublk 00:18:02.121 ************************************ 00:18:02.121 22:08:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:02.121 22:08:34 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:02.121 
22:08:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.121 22:08:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.121 22:08:34 -- common/autotest_common.sh@10 -- # set +x 00:18:02.121 ************************************ 00:18:02.121 START TEST ublk_recovery 00:18:02.121 ************************************ 00:18:02.121 22:08:34 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:02.121 * Looking for test storage... 00:18:02.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:02.121 22:08:34 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:02.121 22:08:34 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:18:02.121 22:08:34 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.381 22:08:35 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:02.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.381 --rc genhtml_branch_coverage=1 00:18:02.381 --rc genhtml_function_coverage=1 00:18:02.381 --rc genhtml_legend=1 00:18:02.381 --rc geninfo_all_blocks=1 00:18:02.381 --rc geninfo_unexecuted_blocks=1 00:18:02.381 00:18:02.381 ' 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:02.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.381 --rc genhtml_branch_coverage=1 00:18:02.381 --rc genhtml_function_coverage=1 00:18:02.381 --rc genhtml_legend=1 00:18:02.381 --rc geninfo_all_blocks=1 00:18:02.381 --rc geninfo_unexecuted_blocks=1 00:18:02.381 00:18:02.381 ' 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:02.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.381 --rc genhtml_branch_coverage=1 00:18:02.381 --rc genhtml_function_coverage=1 00:18:02.381 --rc genhtml_legend=1 00:18:02.381 --rc geninfo_all_blocks=1 00:18:02.381 --rc geninfo_unexecuted_blocks=1 00:18:02.381 00:18:02.381 ' 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:02.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.381 --rc genhtml_branch_coverage=1 00:18:02.381 --rc genhtml_function_coverage=1 00:18:02.381 --rc genhtml_legend=1 00:18:02.381 --rc geninfo_all_blocks=1 00:18:02.381 --rc geninfo_unexecuted_blocks=1 00:18:02.381 00:18:02.381 ' 00:18:02.381 22:08:35 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:02.381 22:08:35 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:02.381 22:08:35 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:02.381 22:08:35 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:02.381 22:08:35 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:02.381 22:08:35 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:02.381 22:08:35 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:02.381 22:08:35 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:02.381 22:08:35 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:02.381 22:08:35 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:02.381 22:08:35 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74151 00:18:02.381 22:08:35 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.381 22:08:35 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:02.381 22:08:35 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74151 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74151 ']' 00:18:02.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.381 22:08:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.381 [2024-12-06 22:08:35.117464] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:18:02.381 [2024-12-06 22:08:35.117694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74151 ] 00:18:02.641 [2024-12-06 22:08:35.268141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:02.641 [2024-12-06 22:08:35.346940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.641 [2024-12-06 22:08:35.347009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.212 22:08:35 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.212 22:08:35 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:03.212 22:08:35 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:03.212 22:08:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.212 22:08:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.212 [2024-12-06 22:08:35.968190] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:03.212 [2024-12-06 22:08:35.969743] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:03.212 22:08:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.212 22:08:35 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:03.212 22:08:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.212 22:08:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.212 malloc0 00:18:03.212 22:08:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.212 22:08:36 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:03.212 22:08:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.212 22:08:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.212 [2024-12-06 22:08:36.048336] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:03.212 [2024-12-06 22:08:36.048416] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:03.212 [2024-12-06 22:08:36.048424] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:03.212 [2024-12-06 22:08:36.048430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:03.212 [2024-12-06 22:08:36.057283] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:03.212 [2024-12-06 22:08:36.057304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:03.212 [2024-12-06 22:08:36.064202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:03.212 [2024-12-06 22:08:36.064316] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:03.474 [2024-12-06 22:08:36.087196] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:03.474 1 00:18:03.474 22:08:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.474 22:08:36 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:04.416 22:08:37 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74186 00:18:04.416 22:08:37 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:04.416 22:08:37 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:04.416 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:04.416 fio-3.35 00:18:04.416 Starting 1 process 00:18:09.686 22:08:42 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74151 00:18:09.686 22:08:42 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:14.970 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74151 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:14.970 22:08:47 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74297 00:18:14.970 22:08:47 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.970 22:08:47 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74297 00:18:14.970 22:08:47 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74297 ']' 00:18:14.970 22:08:47 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:14.970 22:08:47 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.970 22:08:47 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.970 22:08:47 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.970 22:08:47 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.970 22:08:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.970 [2024-12-06 22:08:47.208736] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:18:14.970 [2024-12-06 22:08:47.209193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74297 ] 00:18:14.970 [2024-12-06 22:08:47.379333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:14.970 [2024-12-06 22:08:47.511337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.970 [2024-12-06 22:08:47.511434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.539 22:08:48 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.539 22:08:48 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:15.539 22:08:48 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:15.539 22:08:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.539 22:08:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.539 [2024-12-06 22:08:48.319203] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:15.539 [2024-12-06 22:08:48.321803] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:15.539 22:08:48 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.539 22:08:48 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:15.539 22:08:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.539 22:08:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.797 malloc0 00:18:15.797 22:08:48 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.797 22:08:48 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:15.797 22:08:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.797 22:08:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.797 [2024-12-06 22:08:48.439374] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:15.797 [2024-12-06 22:08:48.439409] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:15.797 [2024-12-06 22:08:48.439419] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:15.797 [2024-12-06 22:08:48.447225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:15.797 [2024-12-06 22:08:48.447249] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:15.797 1 00:18:15.797 22:08:48 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.797 22:08:48 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74186 00:18:16.728 [2024-12-06 22:08:49.447280] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:16.728 [2024-12-06 22:08:49.455193] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:16.728 [2024-12-06 22:08:49.455209] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:17.657 [2024-12-06 22:08:50.455234] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:17.657 [2024-12-06 22:08:50.459194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:17.657 [2024-12-06 22:08:50.459208] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:18:19.025 [2024-12-06 22:08:51.459228] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:19.025 [2024-12-06 22:08:51.467201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:19.025 [2024-12-06 22:08:51.467216] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:19.025 [2024-12-06 22:08:51.467225] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:19.025 [2024-12-06 22:08:51.467297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:40.939 [2024-12-06 22:09:12.539197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:40.939 [2024-12-06 22:09:12.545758] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:40.939 [2024-12-06 22:09:12.553357] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:40.939 [2024-12-06 22:09:12.553378] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:07.467 00:19:07.467 fio_test: (groupid=0, jobs=1): err= 0: pid=74195: Fri Dec 6 22:09:37 2024 00:19:07.467 read: IOPS=13.8k, BW=54.0MiB/s (56.7MB/s)(3242MiB/60002msec) 00:19:07.467 slat (nsec): min=1188, max=139364, avg=5130.19, stdev=1550.28 00:19:07.467 clat (usec): min=993, max=30462k, avg=4362.79, stdev=254614.67 00:19:07.467 lat (usec): min=998, max=30462k, avg=4367.93, stdev=254614.67 00:19:07.467 clat percentiles (usec): 00:19:07.467 | 1.00th=[ 1811], 5.00th=[ 1909], 10.00th=[ 1942], 20.00th=[ 1975], 00:19:07.467 | 30.00th=[ 1991], 40.00th=[ 2024], 50.00th=[ 2040], 60.00th=[ 2089], 00:19:07.467 | 70.00th=[ 2180], 80.00th=[ 2409], 90.00th=[ 2540], 95.00th=[ 3130], 00:19:07.467 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 6980], 99.95th=[ 8029], 00:19:07.467 | 99.99th=[12649] 00:19:07.467 bw ( KiB/s): min=32304, max=122520, per=100.00%, avg=111003.31, stdev=16407.92, samples=59 00:19:07.467 iops : min= 8076, max=30630, avg=27750.81, stdev=4101.99, samples=59 00:19:07.467 write: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(3238MiB/60002msec); 0 zone resets 00:19:07.467 slat (nsec): min=1140, max=134638, avg=5182.18, stdev=1589.04 00:19:07.467 clat (usec): min=913, max=30462k, avg=4885.82, stdev=279897.38 00:19:07.467 lat (usec): min=918, max=30462k, avg=4891.00, stdev=279897.39 00:19:07.467 clat percentiles (usec): 00:19:07.467 | 1.00th=[ 1860], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2057], 00:19:07.467 | 30.00th=[ 2089], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2180], 00:19:07.467 | 70.00th=[ 2278], 80.00th=[ 2474], 90.00th=[ 2606], 95.00th=[ 3097], 00:19:07.467 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 7046], 99.95th=[ 8094], 00:19:07.467 | 99.99th=[12780] 00:19:07.467 bw ( KiB/s): min=32440, max=121568, per=100.00%, avg=110868.73, stdev=16258.22, samples=59 00:19:07.467 iops : min= 8110, max=30392, avg=27717.17, stdev=4064.56, samples=59 00:19:07.467 lat (usec) : 1000=0.01% 00:19:07.467 lat (msec) : 2=18.63%, 4=78.44%, 10=2.89%, 20=0.02%, >=2000=0.01% 00:19:07.467 cpu : usr=3.23%, sys=14.72%, ctx=55263, majf=0, minf=13 00:19:07.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:07.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:07.467 issued 
rwts: total=829969,828886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:07.467 00:19:07.467 Run status group 0 (all jobs): 00:19:07.467 READ: bw=54.0MiB/s (56.7MB/s), 54.0MiB/s-54.0MiB/s (56.7MB/s-56.7MB/s), io=3242MiB (3400MB), run=60002-60002msec 00:19:07.467 WRITE: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=3238MiB (3395MB), run=60002-60002msec 00:19:07.467 00:19:07.467 Disk stats (read/write): 00:19:07.467 ublkb1: ios=827325/826165, merge=0/0, ticks=3560799/3920732, in_queue=7481532, util=99.91% 00:19:07.467 22:09:37 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.467 [2024-12-06 22:09:37.357501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:07.467 [2024-12-06 22:09:37.412292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:07.467 [2024-12-06 22:09:37.412447] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:07.467 [2024-12-06 22:09:37.419200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:07.467 [2024-12-06 22:09:37.419291] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:07.467 [2024-12-06 22:09:37.419297] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.467 22:09:37 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.467 [2024-12-06 22:09:37.432279] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:07.467 [2024-12-06 22:09:37.443190] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:07.467 [2024-12-06 22:09:37.443220] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.467 22:09:37 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:07.467 22:09:37 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:07.467 22:09:37 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74297 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74297 ']' 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74297 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74297 00:19:07.467 killing process with pid 74297 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74297' 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74297 00:19:07.467 22:09:37 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74297 
00:19:07.467 [2024-12-06 22:09:38.557284] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:07.467 [2024-12-06 22:09:38.557336] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:07.467 00:19:07.467 real 1m4.515s 00:19:07.467 user 1m47.000s 00:19:07.467 sys 0m22.094s 00:19:07.467 22:09:39 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.467 ************************************ 00:19:07.467 END TEST ublk_recovery 00:19:07.467 ************************************ 00:19:07.467 22:09:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.467 22:09:39 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:07.467 22:09:39 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:07.467 22:09:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.467 22:09:39 -- common/autotest_common.sh@10 -- # set +x 00:19:07.467 22:09:39 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:07.467 22:09:39 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:07.467 22:09:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.467 22:09:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.467 22:09:39 -- common/autotest_common.sh@10 -- # set +x 00:19:07.467 ************************************ 00:19:07.467 START TEST ftl 00:19:07.467 ************************************ 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:07.467 * Looking for test storage... 
00:19:07.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:07.467 22:09:39 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.467 22:09:39 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.467 22:09:39 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.467 22:09:39 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.467 22:09:39 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.467 22:09:39 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.467 22:09:39 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.467 22:09:39 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.467 22:09:39 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.467 22:09:39 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.467 22:09:39 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.467 22:09:39 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:07.467 22:09:39 ftl -- scripts/common.sh@345 -- # : 1 00:19:07.467 22:09:39 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.467 22:09:39 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.467 22:09:39 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:07.467 22:09:39 ftl -- scripts/common.sh@353 -- # local d=1 00:19:07.467 22:09:39 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.467 22:09:39 ftl -- scripts/common.sh@355 -- # echo 1 00:19:07.467 22:09:39 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.467 22:09:39 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:07.467 22:09:39 ftl -- scripts/common.sh@353 -- # local d=2 00:19:07.467 22:09:39 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.467 22:09:39 ftl -- scripts/common.sh@355 -- # echo 2 00:19:07.467 22:09:39 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.467 22:09:39 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.467 22:09:39 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.467 22:09:39 ftl -- scripts/common.sh@368 -- # return 0 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:07.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.467 --rc genhtml_branch_coverage=1 00:19:07.467 --rc genhtml_function_coverage=1 00:19:07.467 --rc genhtml_legend=1 00:19:07.467 --rc geninfo_all_blocks=1 00:19:07.467 --rc geninfo_unexecuted_blocks=1 00:19:07.467 00:19:07.467 ' 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:07.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.467 --rc genhtml_branch_coverage=1 00:19:07.467 --rc genhtml_function_coverage=1 00:19:07.467 --rc genhtml_legend=1 00:19:07.467 --rc geninfo_all_blocks=1 00:19:07.467 --rc geninfo_unexecuted_blocks=1 00:19:07.467 00:19:07.467 ' 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:07.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.467 --rc genhtml_branch_coverage=1 00:19:07.467 --rc genhtml_function_coverage=1 00:19:07.467 --rc 
genhtml_legend=1 00:19:07.467 --rc geninfo_all_blocks=1 00:19:07.467 --rc geninfo_unexecuted_blocks=1 00:19:07.467 00:19:07.467 ' 00:19:07.467 22:09:39 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:07.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.467 --rc genhtml_branch_coverage=1 00:19:07.467 --rc genhtml_function_coverage=1 00:19:07.467 --rc genhtml_legend=1 00:19:07.467 --rc geninfo_all_blocks=1 00:19:07.467 --rc geninfo_unexecuted_blocks=1 00:19:07.467 00:19:07.467 ' 00:19:07.467 22:09:39 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:07.467 22:09:39 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:07.467 22:09:39 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:07.467 22:09:39 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:07.467 22:09:39 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:07.468 22:09:39 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:07.468 22:09:39 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.468 22:09:39 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:07.468 22:09:39 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:07.468 22:09:39 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:07.468 22:09:39 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:07.468 22:09:39 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:07.468 22:09:39 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:07.468 22:09:39 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:07.468 22:09:39 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:07.468 22:09:39 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:07.468 22:09:39 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:07.468 22:09:39 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:07.468 22:09:39 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:07.468 22:09:39 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:07.468 22:09:39 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:07.468 22:09:39 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:07.468 22:09:39 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:07.468 22:09:39 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:07.468 22:09:39 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:07.468 22:09:39 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:07.468 22:09:39 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:07.468 22:09:39 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.468 22:09:39 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.468 22:09:39 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.468 22:09:39 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:07.468 22:09:39 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:19:07.468 22:09:39 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:07.468 22:09:39 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:07.468 22:09:39 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:07.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:07.468 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:07.468 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:07.468 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:07.468 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:07.468 22:09:40 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:07.468 22:09:40 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75102 00:19:07.468 22:09:40 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75102 00:19:07.468 22:09:40 ftl -- common/autotest_common.sh@835 -- # '[' -z 75102 ']' 00:19:07.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.468 22:09:40 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.468 22:09:40 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.468 22:09:40 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.468 22:09:40 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.468 22:09:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:07.468 [2024-12-06 22:09:40.278130] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:19:07.468 [2024-12-06 22:09:40.278293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75102 ] 00:19:07.729 [2024-12-06 22:09:40.438645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.729 [2024-12-06 22:09:40.525826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.299 22:09:41 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.299 22:09:41 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:08.299 22:09:41 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:08.558 22:09:41 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:09.129 22:09:41 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:09.129 22:09:41 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:09.700 22:09:42 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:09.700 22:09:42 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:09.700 22:09:42 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@50 -- # break 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:09.961 22:09:42 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@63 -- # break 00:19:09.961 22:09:42 ftl -- ftl/ftl.sh@66 -- # killprocess 75102 00:19:09.961 22:09:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 75102 ']' 00:19:09.961 22:09:42 ftl -- common/autotest_common.sh@958 -- # kill -0 75102 00:19:09.961 22:09:42 ftl -- common/autotest_common.sh@959 -- # uname 00:19:09.961 22:09:42 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.961 22:09:42 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75102 00:19:10.221 22:09:42 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.221 killing process with pid 75102 00:19:10.221 22:09:42 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.221 22:09:42 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75102' 00:19:10.221 22:09:42 ftl -- common/autotest_common.sh@973 -- # kill 75102 00:19:10.221 22:09:42 ftl -- common/autotest_common.sh@978 -- # wait 75102 00:19:11.185 22:09:43 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:11.185 22:09:43 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:11.185 22:09:43 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:11.185 22:09:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.185 22:09:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:11.185 ************************************ 00:19:11.185 START TEST ftl_fio_basic 00:19:11.185 ************************************ 00:19:11.185 22:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:11.481 * Looking for test storage... 
00:19:11.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:11.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.481 --rc genhtml_branch_coverage=1 00:19:11.481 --rc genhtml_function_coverage=1 00:19:11.481 --rc genhtml_legend=1 00:19:11.481 --rc geninfo_all_blocks=1 00:19:11.481 --rc geninfo_unexecuted_blocks=1 00:19:11.481 00:19:11.481 ' 00:19:11.481 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:11.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.481 --rc 
genhtml_branch_coverage=1 00:19:11.481 --rc genhtml_function_coverage=1 00:19:11.481 --rc genhtml_legend=1 00:19:11.481 --rc geninfo_all_blocks=1 00:19:11.482 --rc geninfo_unexecuted_blocks=1 00:19:11.482 00:19:11.482 ' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.482 --rc genhtml_branch_coverage=1 00:19:11.482 --rc genhtml_function_coverage=1 00:19:11.482 --rc genhtml_legend=1 00:19:11.482 --rc geninfo_all_blocks=1 00:19:11.482 --rc geninfo_unexecuted_blocks=1 00:19:11.482 00:19:11.482 ' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.482 --rc genhtml_branch_coverage=1 00:19:11.482 --rc genhtml_function_coverage=1 00:19:11.482 --rc genhtml_legend=1 00:19:11.482 --rc geninfo_all_blocks=1 00:19:11.482 --rc geninfo_unexecuted_blocks=1 00:19:11.482 00:19:11.482 ' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:11.482 
22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75234 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75234 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75234 ']' 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
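[editor's note] fio.sh then starts its own spdk_tgt (core mask 7, matching the three reactors logged below) and blocks in waitforlisten until the RPC socket answers. The following is a simplified stand-in for that startup handshake, assuming the helper semantics from autotest_common.sh; the real waitforlisten traced above also honors max_retries=100 and the rpc_addr local:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_sock=/var/tmp/spdk.sock

    $spdk_tgt -m 7 &            # reactors on cores 0-2, as logged below
    svcpid=$!

    # Poll the UNIX domain socket until the target accepts RPCs, bailing
    # out if the process died first (simplified; not the real helper).
    for ((i = 0; i < 100; i++)); do
        kill -0 $svcpid 2>/dev/null || { echo "spdk_tgt exited early"; exit 1; }
        $rpc -s $rpc_sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done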
00:19:11.482 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.482 22:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:11.482 [2024-12-06 22:09:44.243395] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:19:11.482 [2024-12-06 22:09:44.243518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75234 ] 00:19:11.755 [2024-12-06 22:09:44.393100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:11.755 [2024-12-06 22:09:44.468774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.755 [2024-12-06 22:09:44.469374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.755 [2024-12-06 22:09:44.469412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.335 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.335 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:12.335 22:09:45 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:12.335 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:12.335 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:12.335 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:12.335 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:12.335 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:12.602 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:12.602 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:12.602 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:12.602 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:12.602 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:12.602 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:12.602 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:12.602 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:12.863 { 00:19:12.863 "name": "nvme0n1", 00:19:12.863 "aliases": [ 00:19:12.863 "57742ad5-4b5a-423b-90ef-bddce80da265" 00:19:12.863 ], 00:19:12.863 "product_name": "NVMe disk", 00:19:12.863 "block_size": 4096, 00:19:12.863 "num_blocks": 1310720, 00:19:12.863 "uuid": "57742ad5-4b5a-423b-90ef-bddce80da265", 00:19:12.863 "numa_id": -1, 00:19:12.863 "assigned_rate_limits": { 00:19:12.863 "rw_ios_per_sec": 0, 00:19:12.863 "rw_mbytes_per_sec": 0, 00:19:12.863 "r_mbytes_per_sec": 0, 00:19:12.863 "w_mbytes_per_sec": 0 00:19:12.863 }, 00:19:12.863 "claimed": false, 00:19:12.863 "zoned": false, 00:19:12.863 "supported_io_types": { 00:19:12.863 "read": true, 00:19:12.863 "write": true, 00:19:12.863 "unmap": true, 00:19:12.863 "flush": true, 00:19:12.863 "reset": true, 00:19:12.863 "nvme_admin": true, 00:19:12.863 "nvme_io": true, 00:19:12.863 "nvme_io_md": 
false, 00:19:12.863 "write_zeroes": true, 00:19:12.863 "zcopy": false, 00:19:12.863 "get_zone_info": false, 00:19:12.863 "zone_management": false, 00:19:12.863 "zone_append": false, 00:19:12.863 "compare": true, 00:19:12.863 "compare_and_write": false, 00:19:12.863 "abort": true, 00:19:12.863 "seek_hole": false, 00:19:12.863 "seek_data": false, 00:19:12.863 "copy": true, 00:19:12.863 "nvme_iov_md": false 00:19:12.863 }, 00:19:12.863 "driver_specific": { 00:19:12.863 "nvme": [ 00:19:12.863 { 00:19:12.863 "pci_address": "0000:00:11.0", 00:19:12.863 "trid": { 00:19:12.863 "trtype": "PCIe", 00:19:12.863 "traddr": "0000:00:11.0" 00:19:12.863 }, 00:19:12.863 "ctrlr_data": { 00:19:12.863 "cntlid": 0, 00:19:12.863 "vendor_id": "0x1b36", 00:19:12.863 "model_number": "QEMU NVMe Ctrl", 00:19:12.863 "serial_number": "12341", 00:19:12.863 "firmware_revision": "8.0.0", 00:19:12.863 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:12.863 "oacs": { 00:19:12.863 "security": 0, 00:19:12.863 "format": 1, 00:19:12.863 "firmware": 0, 00:19:12.863 "ns_manage": 1 00:19:12.863 }, 00:19:12.863 "multi_ctrlr": false, 00:19:12.863 "ana_reporting": false 00:19:12.863 }, 00:19:12.863 "vs": { 00:19:12.863 "nvme_version": "1.4" 00:19:12.863 }, 00:19:12.863 "ns_data": { 00:19:12.863 "id": 1, 00:19:12.863 "can_share": false 00:19:12.863 } 00:19:12.863 } 00:19:12.863 ], 00:19:12.863 "mp_policy": "active_passive" 00:19:12.863 } 00:19:12.863 } 00:19:12.863 ]' 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:12.863 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:13.122 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:13.122 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:13.122 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ae200f18-f8e2-4eaf-a258-b083b945b9ec 00:19:13.122 22:09:45 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ae200f18-f8e2-4eaf-a258-b083b945b9ec 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:13.383 22:09:46 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:13.383 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:13.644 { 00:19:13.644 "name": "35d6a190-6668-49e7-9e2a-f99da6f4a6b9", 00:19:13.644 "aliases": [ 00:19:13.644 "lvs/nvme0n1p0" 00:19:13.644 ], 00:19:13.644 "product_name": "Logical Volume", 00:19:13.644 "block_size": 4096, 00:19:13.644 "num_blocks": 26476544, 00:19:13.644 "uuid": "35d6a190-6668-49e7-9e2a-f99da6f4a6b9", 00:19:13.644 "assigned_rate_limits": { 00:19:13.644 "rw_ios_per_sec": 0, 00:19:13.644 "rw_mbytes_per_sec": 0, 00:19:13.644 "r_mbytes_per_sec": 0, 00:19:13.644 "w_mbytes_per_sec": 0 00:19:13.644 }, 00:19:13.644 "claimed": false, 00:19:13.644 "zoned": false, 00:19:13.644 "supported_io_types": { 00:19:13.644 "read": true, 00:19:13.644 "write": true, 00:19:13.644 "unmap": true, 00:19:13.644 "flush": false, 00:19:13.644 "reset": true, 00:19:13.644 "nvme_admin": false, 00:19:13.644 "nvme_io": false, 00:19:13.644 "nvme_io_md": false, 00:19:13.644 "write_zeroes": true, 00:19:13.644 "zcopy": false, 00:19:13.644 "get_zone_info": false, 00:19:13.644 "zone_management": false, 00:19:13.644 "zone_append": false, 00:19:13.644 "compare": false, 00:19:13.644 "compare_and_write": false, 00:19:13.644 "abort": false, 00:19:13.644 "seek_hole": true, 00:19:13.644 "seek_data": true, 00:19:13.644 "copy": false, 00:19:13.644 "nvme_iov_md": false 00:19:13.644 }, 00:19:13.644 "driver_specific": { 00:19:13.644 "lvol": { 00:19:13.644 "lvol_store_uuid": "ae200f18-f8e2-4eaf-a258-b083b945b9ec", 00:19:13.644 "base_bdev": "nvme0n1", 00:19:13.644 "thin_provision": true, 00:19:13.644 "num_allocated_clusters": 0, 00:19:13.644 "snapshot": false, 00:19:13.644 "clone": false, 00:19:13.644 "esnap_clone": false 00:19:13.644 } 00:19:13.644 } 00:19:13.644 } 00:19:13.644 ]' 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:13.644 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:13.903 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:13.903 22:09:46 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:19:13.903 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:13.903 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:13.903 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:13.903 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:13.903 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:13.903 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:14.161 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:14.161 { 00:19:14.161 "name": "35d6a190-6668-49e7-9e2a-f99da6f4a6b9", 00:19:14.161 "aliases": [ 00:19:14.161 "lvs/nvme0n1p0" 00:19:14.161 ], 00:19:14.161 "product_name": "Logical Volume", 00:19:14.161 "block_size": 4096, 00:19:14.161 "num_blocks": 26476544, 00:19:14.161 "uuid": "35d6a190-6668-49e7-9e2a-f99da6f4a6b9", 00:19:14.161 "assigned_rate_limits": { 00:19:14.161 "rw_ios_per_sec": 0, 00:19:14.161 "rw_mbytes_per_sec": 0, 00:19:14.161 "r_mbytes_per_sec": 0, 00:19:14.161 "w_mbytes_per_sec": 0 00:19:14.161 }, 00:19:14.161 "claimed": false, 00:19:14.161 "zoned": false, 00:19:14.161 "supported_io_types": { 00:19:14.161 "read": true, 00:19:14.161 "write": true, 00:19:14.161 "unmap": true, 00:19:14.161 "flush": false, 00:19:14.161 "reset": true, 00:19:14.161 "nvme_admin": false, 00:19:14.161 "nvme_io": false, 00:19:14.161 "nvme_io_md": false, 00:19:14.161 "write_zeroes": true, 00:19:14.161 "zcopy": false, 00:19:14.161 "get_zone_info": false, 00:19:14.161 "zone_management": false, 00:19:14.161 "zone_append": false, 00:19:14.161 "compare": false, 00:19:14.161 "compare_and_write": false, 00:19:14.161 "abort": false, 00:19:14.161 "seek_hole": true, 00:19:14.161 "seek_data": true, 00:19:14.161 "copy": false, 00:19:14.161 "nvme_iov_md": false 00:19:14.161 }, 00:19:14.161 "driver_specific": { 00:19:14.161 "lvol": { 00:19:14.161 "lvol_store_uuid": "ae200f18-f8e2-4eaf-a258-b083b945b9ec", 00:19:14.161 "base_bdev": "nvme0n1", 00:19:14.161 "thin_provision": true, 00:19:14.161 "num_allocated_clusters": 0, 00:19:14.161 "snapshot": false, 00:19:14.161 "clone": false, 00:19:14.161 "esnap_clone": false 00:19:14.161 } 00:19:14.161 } 00:19:14.161 } 00:19:14.161 ]' 00:19:14.161 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:14.161 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:14.161 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:14.161 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:14.161 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:14.162 22:09:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:14.162 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:14.162 22:09:46 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:14.420 22:09:47 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:14.420 22:09:47 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:14.420 22:09:47 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:14.420 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:14.420 22:09:47 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:14.420 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:14.420 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:14.420 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:14.420 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:14.420 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35d6a190-6668-49e7-9e2a-f99da6f4a6b9 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:14.678 { 00:19:14.678 "name": "35d6a190-6668-49e7-9e2a-f99da6f4a6b9", 00:19:14.678 "aliases": [ 00:19:14.678 "lvs/nvme0n1p0" 00:19:14.678 ], 00:19:14.678 "product_name": "Logical Volume", 00:19:14.678 "block_size": 4096, 00:19:14.678 "num_blocks": 26476544, 00:19:14.678 "uuid": "35d6a190-6668-49e7-9e2a-f99da6f4a6b9", 00:19:14.678 "assigned_rate_limits": { 00:19:14.678 "rw_ios_per_sec": 0, 00:19:14.678 "rw_mbytes_per_sec": 0, 00:19:14.678 "r_mbytes_per_sec": 0, 00:19:14.678 "w_mbytes_per_sec": 0 00:19:14.678 }, 00:19:14.678 "claimed": false, 00:19:14.678 "zoned": false, 00:19:14.678 "supported_io_types": { 00:19:14.678 "read": true, 00:19:14.678 "write": true, 00:19:14.678 "unmap": true, 00:19:14.678 "flush": false, 00:19:14.678 "reset": true, 00:19:14.678 "nvme_admin": false, 00:19:14.678 "nvme_io": false, 00:19:14.678 "nvme_io_md": false, 00:19:14.678 "write_zeroes": true, 00:19:14.678 "zcopy": false, 00:19:14.678 "get_zone_info": false, 00:19:14.678 "zone_management": false, 00:19:14.678 "zone_append": false, 00:19:14.678 "compare": false, 00:19:14.678 "compare_and_write": false, 00:19:14.678 "abort": false, 00:19:14.678 "seek_hole": true, 00:19:14.678 "seek_data": true, 00:19:14.678 "copy": false, 00:19:14.678 "nvme_iov_md": false 00:19:14.678 }, 00:19:14.678 "driver_specific": { 00:19:14.678 "lvol": { 00:19:14.678 "lvol_store_uuid": "ae200f18-f8e2-4eaf-a258-b083b945b9ec", 00:19:14.678 "base_bdev": "nvme0n1", 00:19:14.678 "thin_provision": true, 00:19:14.678 "num_allocated_clusters": 0, 00:19:14.678 "snapshot": false, 00:19:14.678 "clone": false, 00:19:14.678 "esnap_clone": false 00:19:14.678 } 00:19:14.678 } 00:19:14.678 } 00:19:14.678 ]' 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:14.678 22:09:47 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 35d6a190-6668-49e7-9e2a-f99da6f4a6b9 -c nvc0n1p0 --l2p_dram_limit 60 00:19:14.937 [2024-12-06 22:09:47.580832] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.937 [2024-12-06 22:09:47.580868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:14.937 [2024-12-06 22:09:47.580881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:14.937 [2024-12-06 22:09:47.580887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.937 [2024-12-06 22:09:47.580937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.937 [2024-12-06 22:09:47.580946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.937 [2024-12-06 22:09:47.580959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:14.937 [2024-12-06 22:09:47.580965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.937 [2024-12-06 22:09:47.580998] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:14.937 [2024-12-06 22:09:47.581604] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:14.937 [2024-12-06 22:09:47.581619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.937 [2024-12-06 22:09:47.581625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.937 [2024-12-06 22:09:47.581632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:19:14.937 [2024-12-06 22:09:47.581638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.937 [2024-12-06 22:09:47.581670] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID daef629d-1555-4c70-a65e-fbe011bc1099 00:19:14.937 [2024-12-06 22:09:47.582685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.937 [2024-12-06 22:09:47.582709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:14.937 [2024-12-06 22:09:47.582716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:14.937 [2024-12-06 22:09:47.582724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.937 [2024-12-06 22:09:47.587844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.937 [2024-12-06 22:09:47.587871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.937 [2024-12-06 22:09:47.587879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.027 ms 00:19:14.937 [2024-12-06 22:09:47.587886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.937 [2024-12-06 22:09:47.587969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.937 [2024-12-06 22:09:47.587978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.937 [2024-12-06 22:09:47.587985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:14.937 [2024-12-06 22:09:47.587994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.937 [2024-12-06 22:09:47.588038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.937 [2024-12-06 22:09:47.588048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:14.937 [2024-12-06 22:09:47.588054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:14.937 [2024-12-06 22:09:47.588060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:14.937 [2024-12-06 22:09:47.588086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.937 [2024-12-06 22:09:47.590940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.937 [2024-12-06 22:09:47.590963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.937 [2024-12-06 22:09:47.590973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms 00:19:14.937 [2024-12-06 22:09:47.590981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.937 [2024-12-06 22:09:47.591018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.937 [2024-12-06 22:09:47.591024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:14.937 [2024-12-06 22:09:47.591031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:14.937 [2024-12-06 22:09:47.591037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.937 [2024-12-06 22:09:47.591063] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:14.938 [2024-12-06 22:09:47.591189] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:14.938 [2024-12-06 22:09:47.591202] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:14.938 [2024-12-06 22:09:47.591211] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:14.938 [2024-12-06 22:09:47.591220] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591226] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591235] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:14.938 [2024-12-06 22:09:47.591241] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:14.938 [2024-12-06 22:09:47.591248] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:14.938 [2024-12-06 22:09:47.591253] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:14.938 [2024-12-06 22:09:47.591260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.938 [2024-12-06 22:09:47.591268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:14.938 [2024-12-06 22:09:47.591275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:19:14.938 [2024-12-06 22:09:47.591281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.938 [2024-12-06 22:09:47.591354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.938 [2024-12-06 22:09:47.591360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:14.938 [2024-12-06 22:09:47.591368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:14.938 [2024-12-06 22:09:47.591373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.938 [2024-12-06 22:09:47.591467] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:14.938 [2024-12-06 22:09:47.591474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:14.938 
[2024-12-06 22:09:47.591483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:14.938 [2024-12-06 22:09:47.591502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:14.938 [2024-12-06 22:09:47.591521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.938 [2024-12-06 22:09:47.591532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:14.938 [2024-12-06 22:09:47.591539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:14.938 [2024-12-06 22:09:47.591546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.938 [2024-12-06 22:09:47.591551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:14.938 [2024-12-06 22:09:47.591558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:14.938 [2024-12-06 22:09:47.591563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:14.938 [2024-12-06 22:09:47.591579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:14.938 [2024-12-06 22:09:47.591596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:14.938 [2024-12-06 22:09:47.591613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:14.938 [2024-12-06 22:09:47.591630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:14.938 [2024-12-06 22:09:47.591647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:14.938 [2024-12-06 22:09:47.591665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:19:14.938 [2024-12-06 22:09:47.591688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:14.938 [2024-12-06 22:09:47.591693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:14.938 [2024-12-06 22:09:47.591699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.938 [2024-12-06 22:09:47.591704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:14.938 [2024-12-06 22:09:47.591710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:14.938 [2024-12-06 22:09:47.591714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:14.938 [2024-12-06 22:09:47.591725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:14.938 [2024-12-06 22:09:47.591731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591736] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:14.938 [2024-12-06 22:09:47.591743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:14.938 [2024-12-06 22:09:47.591749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.938 [2024-12-06 22:09:47.591762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:14.938 [2024-12-06 22:09:47.591771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:14.938 [2024-12-06 22:09:47.591777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:14.938 [2024-12-06 22:09:47.591783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:14.938 [2024-12-06 22:09:47.591788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:14.938 [2024-12-06 22:09:47.591794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:14.938 [2024-12-06 22:09:47.591801] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:14.938 [2024-12-06 22:09:47.591809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.938 [2024-12-06 22:09:47.591816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:14.938 [2024-12-06 22:09:47.591823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:14.938 [2024-12-06 22:09:47.591828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:14.938 [2024-12-06 22:09:47.591842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:14.938 [2024-12-06 22:09:47.591847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:14.938 [2024-12-06 22:09:47.591855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:14.938 [2024-12-06 
22:09:47.591861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:14.938 [2024-12-06 22:09:47.591867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:14.938 [2024-12-06 22:09:47.591872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:14.938 [2024-12-06 22:09:47.591881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:14.938 [2024-12-06 22:09:47.591886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:14.938 [2024-12-06 22:09:47.591893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:14.938 [2024-12-06 22:09:47.591898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:14.938 [2024-12-06 22:09:47.591905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:14.938 [2024-12-06 22:09:47.591911] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:14.938 [2024-12-06 22:09:47.591918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.938 [2024-12-06 22:09:47.591925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:14.938 [2024-12-06 22:09:47.591932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:14.938 [2024-12-06 22:09:47.591937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:14.938 [2024-12-06 22:09:47.591945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:14.938 [2024-12-06 22:09:47.591951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.938 [2024-12-06 22:09:47.591958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:14.938 [2024-12-06 22:09:47.591964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:19:14.939 [2024-12-06 22:09:47.591971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.939 [2024-12-06 22:09:47.592037] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
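[editor's note] Everything from "Check configuration" down to this scrub notice is the startup trace of a single RPC, issued at fio.sh@60 with the client timeout raised to the suite's 240 s budget: the call blocks through the NV cache scrub, which finishes in seconds on this QEMU NVMe but can run far longer on real media. (The earlier "line 52: [: -eq: unary operator expected" message is an empty variable expanding inside a '[ ... -eq 1 ]' test; '[' sees a missing operand, returns nonzero, and the script simply falls through to computing l2p_dram_size_mb=60 at fio.sh@56.) The create call as made in this run, with this run's lvol UUID:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # -b: FTL_BDEV_NAME; -d: the thin lvol carved from nvme0n1 (base
    # device); -c: the 5171 MiB split of nvc0n1 (NV cache);
    # --l2p_dram_limit: l2p_dram_size_mb from fio.sh@56.
    $rpc -t 240 bdev_ftl_create \
        -b ftl0 \
        -d 35d6a190-6668-49e7-9e2a-f99da6f4a6b9 \
        -c nvc0n1p0 \
        --l2p_dram_limit 60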
00:19:14.939 [2024-12-06 22:09:47.592050] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:18.220 [2024-12-06 22:09:50.469735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.220 [2024-12-06 22:09:50.469803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:18.220 [2024-12-06 22:09:50.469819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2877.681 ms 00:19:18.220 [2024-12-06 22:09:50.469829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.220 [2024-12-06 22:09:50.495692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.220 [2024-12-06 22:09:50.495740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.220 [2024-12-06 22:09:50.495753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.650 ms 00:19:18.220 [2024-12-06 22:09:50.495763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.220 [2024-12-06 22:09:50.495904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.220 [2024-12-06 22:09:50.495917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:18.220 [2024-12-06 22:09:50.495926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:18.220 [2024-12-06 22:09:50.495938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.220 [2024-12-06 22:09:50.540212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.220 [2024-12-06 22:09:50.540257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.220 [2024-12-06 22:09:50.540272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.213 ms 00:19:18.220 [2024-12-06 22:09:50.540283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.220 [2024-12-06 22:09:50.540322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.220 [2024-12-06 22:09:50.540333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.220 [2024-12-06 22:09:50.540341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:18.220 [2024-12-06 22:09:50.540350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.220 [2024-12-06 22:09:50.540709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.220 [2024-12-06 22:09:50.540737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.220 [2024-12-06 22:09:50.540746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:18.220 [2024-12-06 22:09:50.540757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.220 [2024-12-06 22:09:50.540876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.220 [2024-12-06 22:09:50.540887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.220 [2024-12-06 22:09:50.540895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:18.220 [2024-12-06 22:09:50.540906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.555439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.555474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.221 [2024-12-06 
22:09:50.555484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.506 ms 00:19:18.221 [2024-12-06 22:09:50.555493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.566922] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:18.221 [2024-12-06 22:09:50.581592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.581628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:18.221 [2024-12-06 22:09:50.581643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.011 ms 00:19:18.221 [2024-12-06 22:09:50.581651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.636734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.636773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:18.221 [2024-12-06 22:09:50.636790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.046 ms 00:19:18.221 [2024-12-06 22:09:50.636797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.636983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.636993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:18.221 [2024-12-06 22:09:50.637006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:18.221 [2024-12-06 22:09:50.637013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.659664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.659699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:18.221 [2024-12-06 22:09:50.659712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.600 ms 00:19:18.221 [2024-12-06 22:09:50.659720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.682268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.682299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:18.221 [2024-12-06 22:09:50.682312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.502 ms 00:19:18.221 [2024-12-06 22:09:50.682320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.682894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.682916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:18.221 [2024-12-06 22:09:50.682927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:18.221 [2024-12-06 22:09:50.682934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.748440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.748475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:18.221 [2024-12-06 22:09:50.748490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.468 ms 00:19:18.221 [2024-12-06 22:09:50.748500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 
22:09:50.772656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.772690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:18.221 [2024-12-06 22:09:50.772704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.067 ms 00:19:18.221 [2024-12-06 22:09:50.772712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.795287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.795321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:18.221 [2024-12-06 22:09:50.795333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.530 ms 00:19:18.221 [2024-12-06 22:09:50.795341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.818491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.818527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:18.221 [2024-12-06 22:09:50.818540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.101 ms 00:19:18.221 [2024-12-06 22:09:50.818548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.818599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.818608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:18.221 [2024-12-06 22:09:50.818622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:18.221 [2024-12-06 22:09:50.818630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.818712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.221 [2024-12-06 22:09:50.818722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:18.221 [2024-12-06 22:09:50.818732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:18.221 [2024-12-06 22:09:50.818739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.221 [2024-12-06 22:09:50.819665] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3238.397 ms, result 0 00:19:18.221 { 00:19:18.221 "name": "ftl0", 00:19:18.221 "uuid": "daef629d-1555-4c70-a65e-fbe011bc1099" 00:19:18.221 } 00:19:18.221 22:09:50 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:18.221 22:09:50 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:18.221 22:09:50 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:18.221 22:09:50 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:18.221 22:09:50 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:18.221 22:09:50 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:18.221 22:09:50 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:18.221 22:09:51 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:18.480 [ 00:19:18.480 { 00:19:18.480 "name": "ftl0", 00:19:18.480 "aliases": [ 00:19:18.480 "daef629d-1555-4c70-a65e-fbe011bc1099" 00:19:18.480 ], 00:19:18.480 "product_name": "FTL 
disk", 00:19:18.480 "block_size": 4096, 00:19:18.480 "num_blocks": 20971520, 00:19:18.480 "uuid": "daef629d-1555-4c70-a65e-fbe011bc1099", 00:19:18.480 "assigned_rate_limits": { 00:19:18.480 "rw_ios_per_sec": 0, 00:19:18.480 "rw_mbytes_per_sec": 0, 00:19:18.480 "r_mbytes_per_sec": 0, 00:19:18.480 "w_mbytes_per_sec": 0 00:19:18.480 }, 00:19:18.480 "claimed": false, 00:19:18.480 "zoned": false, 00:19:18.480 "supported_io_types": { 00:19:18.480 "read": true, 00:19:18.480 "write": true, 00:19:18.480 "unmap": true, 00:19:18.480 "flush": true, 00:19:18.480 "reset": false, 00:19:18.480 "nvme_admin": false, 00:19:18.480 "nvme_io": false, 00:19:18.480 "nvme_io_md": false, 00:19:18.480 "write_zeroes": true, 00:19:18.480 "zcopy": false, 00:19:18.480 "get_zone_info": false, 00:19:18.480 "zone_management": false, 00:19:18.480 "zone_append": false, 00:19:18.480 "compare": false, 00:19:18.480 "compare_and_write": false, 00:19:18.480 "abort": false, 00:19:18.480 "seek_hole": false, 00:19:18.480 "seek_data": false, 00:19:18.480 "copy": false, 00:19:18.480 "nvme_iov_md": false 00:19:18.480 }, 00:19:18.480 "driver_specific": { 00:19:18.480 "ftl": { 00:19:18.480 "base_bdev": "35d6a190-6668-49e7-9e2a-f99da6f4a6b9", 00:19:18.480 "cache": "nvc0n1p0" 00:19:18.480 } 00:19:18.480 } 00:19:18.480 } 00:19:18.480 ] 00:19:18.480 22:09:51 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:18.480 22:09:51 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:18.480 22:09:51 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:18.737 22:09:51 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:18.737 22:09:51 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:18.996 [2024-12-06 22:09:51.624693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.624740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:18.996 [2024-12-06 22:09:51.624753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:18.996 [2024-12-06 22:09:51.624763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.624800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:18.996 [2024-12-06 22:09:51.627380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.627410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:18.996 [2024-12-06 22:09:51.627422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:19:18.996 [2024-12-06 22:09:51.627431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.627893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.627914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:18.996 [2024-12-06 22:09:51.627925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:19:18.996 [2024-12-06 22:09:51.627932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.631165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.631195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:18.996 
[2024-12-06 22:09:51.631207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:19:18.996 [2024-12-06 22:09:51.631215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.637380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.637409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:18.996 [2024-12-06 22:09:51.637422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:19:18.996 [2024-12-06 22:09:51.637431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.661279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.661311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:18.996 [2024-12-06 22:09:51.661335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.758 ms 00:19:18.996 [2024-12-06 22:09:51.661343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.676220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.676254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:18.996 [2024-12-06 22:09:51.676270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.832 ms 00:19:18.996 [2024-12-06 22:09:51.676279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.676477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.676492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:18.996 [2024-12-06 22:09:51.676503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:19:18.996 [2024-12-06 22:09:51.676510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.699422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.699455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:18.996 [2024-12-06 22:09:51.699467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.880 ms 00:19:18.996 [2024-12-06 22:09:51.699474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.722193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.722227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:18.996 [2024-12-06 22:09:51.722239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.677 ms 00:19:18.996 [2024-12-06 22:09:51.722246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.744152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.744194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:18.996 [2024-12-06 22:09:51.744207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.864 ms 00:19:18.996 [2024-12-06 22:09:51.744213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.766644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.996 [2024-12-06 22:09:51.766678] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:18.996 [2024-12-06 22:09:51.766690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.334 ms 00:19:18.996 [2024-12-06 22:09:51.766697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.996 [2024-12-06 22:09:51.766740] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:18.996 [2024-12-06 22:09:51.766755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 
[2024-12-06 22:09:51.766941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.766996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:18.996 [2024-12-06 22:09:51.767156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:18.996 [2024-12-06 22:09:51.767224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:18.997 [2024-12-06 22:09:51.767624] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:18.997 [2024-12-06 22:09:51.767633] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: daef629d-1555-4c70-a65e-fbe011bc1099 00:19:18.997 [2024-12-06 22:09:51.767641] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:18.997 [2024-12-06 22:09:51.767651] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:18.997 [2024-12-06 22:09:51.767658] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:18.997 [2024-12-06 22:09:51.767668] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:18.997 [2024-12-06 22:09:51.767675] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:18.997 [2024-12-06 22:09:51.767683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:18.997 [2024-12-06 22:09:51.767690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:18.997 [2024-12-06 22:09:51.767697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:18.997 [2024-12-06 22:09:51.767704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:18.997 [2024-12-06 22:09:51.767712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.997 [2024-12-06 22:09:51.767720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:18.997 [2024-12-06 22:09:51.767729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:19:18.997 [2024-12-06 22:09:51.767736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.997 [2024-12-06 22:09:51.780382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.997 [2024-12-06 22:09:51.780414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:18.997 [2024-12-06 22:09:51.780424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.606 ms 00:19:18.997 [2024-12-06 22:09:51.780432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.997 [2024-12-06 22:09:51.780776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.997 [2024-12-06 22:09:51.780795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:18.997 [2024-12-06 22:09:51.780805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:18.997 [2024-12-06 22:09:51.780812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.997 [2024-12-06 22:09:51.824436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.997 [2024-12-06 22:09:51.824471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.997 [2024-12-06 22:09:51.824482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.997 [2024-12-06 22:09:51.824490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
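For reference: the trace_step records above are the FTL management pipeline logging each shutdown action as an Action/name/duration/status quadruple, followed by the band-validity and statistics dumps from ftl_debug.c. The teardown itself is driven by the two rpc.py calls shown earlier in this test; a minimal sketch, using only the commands already visible in this log (paths are the repo defaults used throughout):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Snapshot the bdev subsystem configuration (fio.sh wraps this in '{"subsystems": [' ... ']}')
    $RPC save_subsystem_config -n bdev
    # Unload the FTL bdev; this starts the 'FTL shutdown' management process traced above
    $RPC bdev_ftl_unload -b ftl0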
00:19:18.997 [2024-12-06 22:09:51.824552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.997 [2024-12-06 22:09:51.824561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.997 [2024-12-06 22:09:51.824570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.997 [2024-12-06 22:09:51.824577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.997 [2024-12-06 22:09:51.824671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.997 [2024-12-06 22:09:51.824683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.997 [2024-12-06 22:09:51.824692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.997 [2024-12-06 22:09:51.824699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.997 [2024-12-06 22:09:51.824732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.997 [2024-12-06 22:09:51.824740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.997 [2024-12-06 22:09:51.824749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.997 [2024-12-06 22:09:51.824756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.255 [2024-12-06 22:09:51.906993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.255 [2024-12-06 22:09:51.907034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.255 [2024-12-06 22:09:51.907046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.255 [2024-12-06 22:09:51.907054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.255 [2024-12-06 22:09:51.970389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.255 [2024-12-06 22:09:51.970429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.255 [2024-12-06 22:09:51.970442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.255 [2024-12-06 22:09:51.970450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.255 [2024-12-06 22:09:51.970529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.255 [2024-12-06 22:09:51.970538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:19.255 [2024-12-06 22:09:51.970550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.255 [2024-12-06 22:09:51.970557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.255 [2024-12-06 22:09:51.970637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.255 [2024-12-06 22:09:51.970646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:19.255 [2024-12-06 22:09:51.970656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.255 [2024-12-06 22:09:51.970663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.255 [2024-12-06 22:09:51.970769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.255 [2024-12-06 22:09:51.970780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:19.255 [2024-12-06 22:09:51.970789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.255 [2024-12-06 
22:09:51.970799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.255 [2024-12-06 22:09:51.970857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.255 [2024-12-06 22:09:51.970865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:19.255 [2024-12-06 22:09:51.970874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.255 [2024-12-06 22:09:51.970881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.255 [2024-12-06 22:09:51.970924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.255 [2024-12-06 22:09:51.970932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:19.255 [2024-12-06 22:09:51.970941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.255 [2024-12-06 22:09:51.970950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.255 [2024-12-06 22:09:51.971008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.255 [2024-12-06 22:09:51.971017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:19.255 [2024-12-06 22:09:51.971026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.255 [2024-12-06 22:09:51.971033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.255 [2024-12-06 22:09:51.971208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.467 ms, result 0 00:19:19.255 true 00:19:19.255 22:09:51 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75234 00:19:19.255 22:09:51 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75234 ']' 00:19:19.255 22:09:51 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75234 00:19:19.255 22:09:51 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:19:19.255 22:09:51 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.255 22:09:52 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75234 00:19:19.255 killing process with pid 75234 00:19:19.255 22:09:52 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.255 22:09:52 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.255 22:09:52 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75234' 00:19:19.255 22:09:52 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75234 00:19:19.255 22:09:52 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75234 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:27.389 22:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:27.389 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:27.389 fio-3.35 00:19:27.389 Starting 1 thread 00:19:30.690 00:19:30.690 test: (groupid=0, jobs=1): err= 0: pid=75419: Fri Dec 6 22:10:03 2024 00:19:30.690 read: IOPS=1381, BW=91.7MiB/s (96.2MB/s)(255MiB/2775msec) 00:19:30.690 slat (nsec): min=2984, max=18030, avg=3756.44, stdev=1549.23 00:19:30.690 clat (usec): min=240, max=680, avg=328.36, stdev=44.99 00:19:30.690 lat (usec): min=243, max=684, avg=332.11, stdev=45.67 00:19:30.690 clat percentiles (usec): 00:19:30.690 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 289], 20.00th=[ 297], 00:19:30.691 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 318], 60.00th=[ 322], 00:19:30.691 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 392], 95.00th=[ 433], 00:19:30.691 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 619], 99.95th=[ 660], 00:19:30.691 | 99.99th=[ 685] 00:19:30.691 write: IOPS=1391, BW=92.4MiB/s (96.9MB/s)(256MiB/2772msec); 0 zone resets 00:19:30.691 slat (usec): min=13, max=114, avg=16.77, stdev= 2.80 00:19:30.691 clat (usec): min=260, max=998, avg=359.32, stdev=59.23 00:19:30.691 lat (usec): min=276, max=1017, avg=376.08, stdev=59.41 00:19:30.691 clat percentiles (usec): 00:19:30.691 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 310], 20.00th=[ 318], 00:19:30.691 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:19:30.691 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 433], 95.00th=[ 478], 00:19:30.691 | 99.00th=[ 594], 99.50th=[ 660], 99.90th=[ 947], 99.95th=[ 996], 00:19:30.691 | 99.99th=[ 996] 00:19:30.691 bw ( KiB/s): min=89760, max=102680, per=99.43%, avg=94057.60, stdev=5173.90, samples=5 00:19:30.691 iops : min= 1320, max= 1510, avg=1383.20, stdev=76.09, samples=5 00:19:30.691 lat (usec) : 250=0.01%, 500=98.40%, 750=1.50%, 1000=0.09% 00:19:30.691 
cpu : usr=99.28%, sys=0.11%, ctx=5, majf=0, minf=1169 00:19:30.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.691 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.691 00:19:30.691 Run status group 0 (all jobs): 00:19:30.691 READ: bw=91.7MiB/s (96.2MB/s), 91.7MiB/s-91.7MiB/s (96.2MB/s-96.2MB/s), io=255MiB (267MB), run=2775-2775msec 00:19:30.691 WRITE: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=256MiB (269MB), run=2772-2772msec 00:19:32.604 ----------------------------------------------------- 00:19:32.604 Suppressions used: 00:19:32.604 count bytes template 00:19:32.604 1 5 /usr/src/fio/parse.c 00:19:32.604 1 8 libtcmalloc_minimal.so 00:19:32.604 1 904 libcrypto.so 00:19:32.604 ----------------------------------------------------- 00:19:32.604 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:32.604 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:32.605 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:32.605 22:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:32.605 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:32.605 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:32.605 fio-3.35 00:19:32.605 Starting 2 threads 00:19:59.145 00:19:59.145 first_half: (groupid=0, jobs=1): err= 0: pid=75511: Fri Dec 6 22:10:28 2024 00:19:59.145 read: IOPS=2909, BW=11.4MiB/s (11.9MB/s)(255MiB/22468msec) 00:19:59.145 slat (nsec): min=3151, max=25155, avg=4082.90, stdev=841.22 00:19:59.145 clat (usec): min=643, max=279681, avg=33937.27, stdev=18327.62 00:19:59.145 lat (usec): min=650, max=279685, avg=33941.35, stdev=18327.74 00:19:59.145 clat percentiles (msec): 00:19:59.145 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 31], 00:19:59.145 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:59.145 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 45], 00:19:59.145 | 99.00th=[ 140], 99.50th=[ 157], 99.90th=[ 211], 99.95th=[ 243], 00:19:59.145 | 99.99th=[ 275] 00:19:59.145 write: IOPS=3046, BW=11.9MiB/s (12.5MB/s)(256MiB/21515msec); 0 zone resets 00:19:59.145 slat (usec): min=3, max=318, avg= 5.78, stdev= 2.85 00:19:59.145 clat (usec): min=371, max=75413, avg=9999.68, stdev=16579.80 00:19:59.145 lat (usec): min=378, max=75418, avg=10005.46, stdev=16579.89 00:19:59.145 clat percentiles (usec): 00:19:59.145 | 1.00th=[ 660], 5.00th=[ 750], 10.00th=[ 832], 20.00th=[ 1106], 00:19:59.145 | 30.00th=[ 1975], 40.00th=[ 3425], 50.00th=[ 4686], 60.00th=[ 5473], 00:19:59.145 | 70.00th=[ 6521], 80.00th=[10290], 90.00th=[28443], 95.00th=[61080], 00:19:59.145 | 99.00th=[66323], 99.50th=[68682], 99.90th=[72877], 99.95th=[73925], 00:19:59.145 | 99.99th=[74974] 00:19:59.145 bw ( KiB/s): min= 272, max=42168, per=86.05%, avg=20968.16, stdev=14369.43, samples=25 00:19:59.145 iops : min= 68, max=10542, avg=5242.04, stdev=3592.36, samples=25 00:19:59.145 lat (usec) : 500=0.02%, 750=2.52%, 1000=5.75% 00:19:59.145 lat (msec) : 2=6.93%, 4=6.98%, 10=19.04%, 20=5.10%, 50=47.43% 00:19:59.145 lat (msec) : 100=5.28%, 250=0.94%, 500=0.02% 00:19:59.145 cpu : usr=99.39%, sys=0.08%, ctx=59, majf=0, minf=5556 00:19:59.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:59.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.145 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.145 issued rwts: total=65378,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.145 second_half: (groupid=0, jobs=1): err= 0: pid=75512: Fri Dec 6 22:10:28 2024 00:19:59.145 read: IOPS=2922, BW=11.4MiB/s (12.0MB/s)(255MiB/22329msec) 00:19:59.145 slat (nsec): min=3187, max=23922, avg=5177.84, stdev=1017.86 00:19:59.145 clat (usec): min=592, max=284459, avg=34447.19, stdev=16647.16 00:19:59.145 lat (usec): min=598, max=284465, avg=34452.37, stdev=16647.20 00:19:59.145 clat percentiles (msec): 00:19:59.145 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:19:59.145 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:59.145 | 70.00th=[ 32], 80.00th=[ 36], 90.00th=[ 39], 95.00th=[ 50], 00:19:59.145 | 99.00th=[ 127], 99.50th=[ 148], 99.90th=[ 
178], 99.95th=[ 218], 00:19:59.145 | 99.99th=[ 284] 00:19:59.145 write: IOPS=3595, BW=14.0MiB/s (14.7MB/s)(256MiB/18228msec); 0 zone resets 00:19:59.145 slat (usec): min=3, max=498, avg= 6.84, stdev= 4.69 00:19:59.145 clat (usec): min=380, max=74985, avg=9294.27, stdev=16127.79 00:19:59.145 lat (usec): min=401, max=74991, avg=9301.11, stdev=16127.83 00:19:59.145 clat percentiles (usec): 00:19:59.145 | 1.00th=[ 668], 5.00th=[ 750], 10.00th=[ 824], 20.00th=[ 1057], 00:19:59.145 | 30.00th=[ 1401], 40.00th=[ 2999], 50.00th=[ 4424], 60.00th=[ 5407], 00:19:59.145 | 70.00th=[ 6194], 80.00th=[10159], 90.00th=[13566], 95.00th=[60556], 00:19:59.145 | 99.00th=[65799], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 00:19:59.145 | 99.99th=[74974] 00:19:59.145 bw ( KiB/s): min= 1008, max=42168, per=100.00%, avg=27594.11, stdev=12953.71, samples=19 00:19:59.145 iops : min= 252, max=10542, avg=6898.53, stdev=3238.43, samples=19 00:19:59.145 lat (usec) : 500=0.01%, 750=2.48%, 1000=6.48% 00:19:59.145 lat (msec) : 2=8.38%, 4=6.30%, 10=16.67%, 20=6.26%, 50=46.99% 00:19:59.145 lat (msec) : 100=5.57%, 250=0.86%, 500=0.01% 00:19:59.145 cpu : usr=99.25%, sys=0.13%, ctx=44, majf=0, minf=5557 00:19:59.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:59.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.145 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.145 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.145 00:19:59.145 Run status group 0 (all jobs): 00:19:59.145 READ: bw=22.7MiB/s (23.8MB/s), 11.4MiB/s-11.4MiB/s (11.9MB/s-12.0MB/s), io=510MiB (535MB), run=22329-22468msec 00:19:59.145 WRITE: bw=23.8MiB/s (25.0MB/s), 11.9MiB/s-14.0MiB/s (12.5MB/s-14.7MB/s), io=512MiB (537MB), run=18228-21515msec 00:19:59.145 ----------------------------------------------------- 00:19:59.145 Suppressions used: 00:19:59.145 count bytes template 00:19:59.145 2 10 /usr/src/fio/parse.c 00:19:59.145 2 192 /usr/src/fio/iolog.c 00:19:59.145 1 8 libtcmalloc_minimal.so 00:19:59.145 1 904 libcrypto.so 00:19:59.145 ----------------------------------------------------- 00:19:59.145 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 
-- # local sanitizers 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:59.145 22:10:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:59.145 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:59.145 fio-3.35 00:19:59.145 Starting 1 thread 00:20:17.237 00:20:17.237 test: (groupid=0, jobs=1): err= 0: pid=75810: Fri Dec 6 22:10:48 2024 00:20:17.237 read: IOPS=6487, BW=25.3MiB/s (26.6MB/s)(255MiB/10051msec) 00:20:17.237 slat (nsec): min=3008, max=78309, avg=4784.54, stdev=1160.47 00:20:17.237 clat (usec): min=1776, max=46846, avg=19722.39, stdev=2870.06 00:20:17.237 lat (usec): min=1784, max=46852, avg=19727.18, stdev=2870.04 00:20:17.237 clat percentiles (usec): 00:20:17.237 | 1.00th=[15008], 5.00th=[15664], 10.00th=[16319], 20.00th=[17433], 00:20:17.237 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19530], 60.00th=[20055], 00:20:17.237 | 70.00th=[20579], 80.00th=[21365], 90.00th=[22938], 95.00th=[24773], 00:20:17.237 | 99.00th=[29492], 99.50th=[31327], 99.90th=[36439], 99.95th=[41157], 00:20:17.237 | 99.99th=[45876] 00:20:17.237 write: IOPS=9495, BW=37.1MiB/s (38.9MB/s)(256MiB/6902msec); 0 zone resets 00:20:17.237 slat (usec): min=4, max=2498, avg= 8.62, stdev=14.19 00:20:17.237 clat (usec): min=490, max=81055, avg=13419.56, stdev=15804.15 00:20:17.237 lat (usec): min=498, max=81061, avg=13428.18, stdev=15804.20 00:20:17.237 clat percentiles (usec): 00:20:17.237 | 1.00th=[ 750], 5.00th=[ 1012], 10.00th=[ 1369], 20.00th=[ 1827], 00:20:17.237 | 30.00th=[ 2212], 40.00th=[ 3130], 50.00th=[ 9896], 60.00th=[11731], 00:20:17.237 | 70.00th=[13566], 80.00th=[16319], 90.00th=[45876], 95.00th=[50594], 00:20:17.237 | 99.00th=[62129], 99.50th=[65274], 99.90th=[68682], 99.95th=[70779], 00:20:17.237 | 99.99th=[74974] 00:20:17.237 bw ( KiB/s): min=26184, max=51184, per=98.59%, avg=37444.29, stdev=7004.01, samples=14 00:20:17.237 iops : min= 6546, max=12796, avg=9361.07, stdev=1751.00, samples=14 00:20:17.237 lat (usec) : 500=0.01%, 750=0.50%, 1000=1.89% 00:20:17.237 lat (msec) : 2=9.98%, 4=8.36%, 10=4.53%, 20=45.79%, 50=26.24% 00:20:17.237 lat (msec) : 100=2.72% 00:20:17.237 cpu : usr=98.74%, sys=0.25%, ctx=56, majf=0, minf=5565 00:20:17.237 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:17.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.237 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:17.237 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:17.237 00:20:17.237 Run status group 0 (all jobs): 00:20:17.237 READ: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=255MiB (267MB), run=10051-10051msec 00:20:17.237 WRITE: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=256MiB (268MB), run=6902-6902msec 00:20:17.808 ----------------------------------------------------- 00:20:17.808 Suppressions used: 00:20:17.808 count bytes template 00:20:17.808 1 5 /usr/src/fio/parse.c 00:20:17.808 2 192 /usr/src/fio/iolog.c 00:20:17.808 1 8 libtcmalloc_minimal.so 00:20:17.808 1 904 libcrypto.so 00:20:17.808 ----------------------------------------------------- 00:20:17.808 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:17.808 Remove shared memory files 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57127 /dev/shm/spdk_tgt_trace.pid74151 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:17.808 ************************************ 00:20:17.808 END TEST ftl_fio_basic 00:20:17.808 ************************************ 00:20:17.808 00:20:17.808 real 1m6.680s 00:20:17.808 user 2m13.794s 00:20:17.808 sys 0m16.020s 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.808 22:10:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:18.069 22:10:50 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:18.069 22:10:50 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:18.069 22:10:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.069 22:10:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:18.069 ************************************ 00:20:18.069 START TEST ftl_bdevperf 00:20:18.069 ************************************ 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:18.069 * Looking for test storage... 
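At this point ftl_fio_basic has passed (real 1m6.680s) and the harness moves on to ftl_bdevperf. The invocation recorded above passes the base device and the NV-cache device by PCI address; a minimal sketch of the same call, with the argument roles inferred from the device/cache_device assignments that appear later in this log:

    # 0000:00:11.0 -> base NVMe device, 0000:00:10.0 -> NV cache device
    /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0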
00:20:18.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.069 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:18.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.070 --rc genhtml_branch_coverage=1 00:20:18.070 --rc genhtml_function_coverage=1 00:20:18.070 --rc genhtml_legend=1 00:20:18.070 --rc geninfo_all_blocks=1 00:20:18.070 --rc geninfo_unexecuted_blocks=1 00:20:18.070 00:20:18.070 ' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:18.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.070 --rc genhtml_branch_coverage=1 00:20:18.070 
--rc genhtml_function_coverage=1 00:20:18.070 --rc genhtml_legend=1 00:20:18.070 --rc geninfo_all_blocks=1 00:20:18.070 --rc geninfo_unexecuted_blocks=1 00:20:18.070 00:20:18.070 ' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:18.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.070 --rc genhtml_branch_coverage=1 00:20:18.070 --rc genhtml_function_coverage=1 00:20:18.070 --rc genhtml_legend=1 00:20:18.070 --rc geninfo_all_blocks=1 00:20:18.070 --rc geninfo_unexecuted_blocks=1 00:20:18.070 00:20:18.070 ' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:18.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.070 --rc genhtml_branch_coverage=1 00:20:18.070 --rc genhtml_function_coverage=1 00:20:18.070 --rc genhtml_legend=1 00:20:18.070 --rc geninfo_all_blocks=1 00:20:18.070 --rc geninfo_unexecuted_blocks=1 00:20:18.070 00:20:18.070 ' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:18.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76087 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76087 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76087 ']' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.070 22:10:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:18.330 [2024-12-06 22:10:50.974542] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
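A note on the bdevperf launch recorded above (flag meanings are inferred from the surrounding harness code, not restated from bdevperf's help output): -z appears to keep bdevperf idle until it is started over RPC, and -T ftl0 appears to name the target bdev that the script constructs below. A sketch of the pattern, using only calls visible in this log:

    # Start bdevperf waiting for RPC (-z); target the ftl0 bdev (-T)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $bdevperf_pid   # harness helper from autotest_common.sh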
00:20:18.330 [2024-12-06 22:10:50.974925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76087 ] 00:20:18.330 [2024-12-06 22:10:51.142469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.591 [2024-12-06 22:10:51.264960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.220 22:10:51 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.220 22:10:51 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:19.220 22:10:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:19.220 22:10:51 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:19.220 22:10:51 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:19.220 22:10:51 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:19.220 22:10:51 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:19.220 22:10:51 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:19.496 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:19.496 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:19.496 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:19.496 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:19.496 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:19.497 { 00:20:19.497 "name": "nvme0n1", 00:20:19.497 "aliases": [ 00:20:19.497 "d7e5acc1-15f4-42cf-96dc-f41ad554383c" 00:20:19.497 ], 00:20:19.497 "product_name": "NVMe disk", 00:20:19.497 "block_size": 4096, 00:20:19.497 "num_blocks": 1310720, 00:20:19.497 "uuid": "d7e5acc1-15f4-42cf-96dc-f41ad554383c", 00:20:19.497 "numa_id": -1, 00:20:19.497 "assigned_rate_limits": { 00:20:19.497 "rw_ios_per_sec": 0, 00:20:19.497 "rw_mbytes_per_sec": 0, 00:20:19.497 "r_mbytes_per_sec": 0, 00:20:19.497 "w_mbytes_per_sec": 0 00:20:19.497 }, 00:20:19.497 "claimed": true, 00:20:19.497 "claim_type": "read_many_write_one", 00:20:19.497 "zoned": false, 00:20:19.497 "supported_io_types": { 00:20:19.497 "read": true, 00:20:19.497 "write": true, 00:20:19.497 "unmap": true, 00:20:19.497 "flush": true, 00:20:19.497 "reset": true, 00:20:19.497 "nvme_admin": true, 00:20:19.497 "nvme_io": true, 00:20:19.497 "nvme_io_md": false, 00:20:19.497 "write_zeroes": true, 00:20:19.497 "zcopy": false, 00:20:19.497 "get_zone_info": false, 00:20:19.497 "zone_management": false, 00:20:19.497 "zone_append": false, 00:20:19.497 "compare": true, 00:20:19.497 "compare_and_write": false, 00:20:19.497 "abort": true, 00:20:19.497 "seek_hole": false, 00:20:19.497 "seek_data": false, 00:20:19.497 "copy": true, 00:20:19.497 "nvme_iov_md": false 00:20:19.497 }, 00:20:19.497 "driver_specific": { 00:20:19.497 
"nvme": [ 00:20:19.497 { 00:20:19.497 "pci_address": "0000:00:11.0", 00:20:19.497 "trid": { 00:20:19.497 "trtype": "PCIe", 00:20:19.497 "traddr": "0000:00:11.0" 00:20:19.497 }, 00:20:19.497 "ctrlr_data": { 00:20:19.497 "cntlid": 0, 00:20:19.497 "vendor_id": "0x1b36", 00:20:19.497 "model_number": "QEMU NVMe Ctrl", 00:20:19.497 "serial_number": "12341", 00:20:19.497 "firmware_revision": "8.0.0", 00:20:19.497 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:19.497 "oacs": { 00:20:19.497 "security": 0, 00:20:19.497 "format": 1, 00:20:19.497 "firmware": 0, 00:20:19.497 "ns_manage": 1 00:20:19.497 }, 00:20:19.497 "multi_ctrlr": false, 00:20:19.497 "ana_reporting": false 00:20:19.497 }, 00:20:19.497 "vs": { 00:20:19.497 "nvme_version": "1.4" 00:20:19.497 }, 00:20:19.497 "ns_data": { 00:20:19.497 "id": 1, 00:20:19.497 "can_share": false 00:20:19.497 } 00:20:19.497 } 00:20:19.497 ], 00:20:19.497 "mp_policy": "active_passive" 00:20:19.497 } 00:20:19.497 } 00:20:19.497 ]' 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:19.497 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:19.758 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ae200f18-f8e2-4eaf-a258-b083b945b9ec 00:20:19.758 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:19.758 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae200f18-f8e2-4eaf-a258-b083b945b9ec 00:20:20.017 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:20.278 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=6ae63fbe-9eed-4ecc-afaa-5f8b3e4d16da 00:20:20.278 22:10:52 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6ae63fbe-9eed-4ecc-afaa-5f8b3e4d16da 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:20.278 22:10:53 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:20.278 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:20.538 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:20.538 { 00:20:20.538 "name": "276f03e9-707a-4113-9b4c-f51ddce465e6", 00:20:20.538 "aliases": [ 00:20:20.538 "lvs/nvme0n1p0" 00:20:20.538 ], 00:20:20.538 "product_name": "Logical Volume", 00:20:20.538 "block_size": 4096, 00:20:20.538 "num_blocks": 26476544, 00:20:20.538 "uuid": "276f03e9-707a-4113-9b4c-f51ddce465e6", 00:20:20.538 "assigned_rate_limits": { 00:20:20.538 "rw_ios_per_sec": 0, 00:20:20.538 "rw_mbytes_per_sec": 0, 00:20:20.538 "r_mbytes_per_sec": 0, 00:20:20.538 "w_mbytes_per_sec": 0 00:20:20.538 }, 00:20:20.538 "claimed": false, 00:20:20.538 "zoned": false, 00:20:20.538 "supported_io_types": { 00:20:20.538 "read": true, 00:20:20.538 "write": true, 00:20:20.538 "unmap": true, 00:20:20.538 "flush": false, 00:20:20.538 "reset": true, 00:20:20.538 "nvme_admin": false, 00:20:20.538 "nvme_io": false, 00:20:20.538 "nvme_io_md": false, 00:20:20.538 "write_zeroes": true, 00:20:20.538 "zcopy": false, 00:20:20.538 "get_zone_info": false, 00:20:20.538 "zone_management": false, 00:20:20.538 "zone_append": false, 00:20:20.538 "compare": false, 00:20:20.538 "compare_and_write": false, 00:20:20.538 "abort": false, 00:20:20.538 "seek_hole": true, 00:20:20.538 "seek_data": true, 00:20:20.538 "copy": false, 00:20:20.538 "nvme_iov_md": false 00:20:20.538 }, 00:20:20.538 "driver_specific": { 00:20:20.538 "lvol": { 00:20:20.538 "lvol_store_uuid": "6ae63fbe-9eed-4ecc-afaa-5f8b3e4d16da", 00:20:20.538 "base_bdev": "nvme0n1", 00:20:20.538 "thin_provision": true, 00:20:20.538 "num_allocated_clusters": 0, 00:20:20.538 "snapshot": false, 00:20:20.538 "clone": false, 00:20:20.538 "esnap_clone": false 00:20:20.538 } 00:20:20.538 } 00:20:20.538 } 00:20:20.538 ]' 00:20:20.538 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:20.538 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:20.538 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:20.539 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:20.539 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:20.539 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:20.539 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:20.539 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:20.539 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:20.798 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:20.798 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:20.798 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:20.798 22:10:53 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:20.798 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:20.798 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:20.798 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:20.798 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:21.059 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:21.059 { 00:20:21.059 "name": "276f03e9-707a-4113-9b4c-f51ddce465e6", 00:20:21.059 "aliases": [ 00:20:21.059 "lvs/nvme0n1p0" 00:20:21.059 ], 00:20:21.059 "product_name": "Logical Volume", 00:20:21.059 "block_size": 4096, 00:20:21.059 "num_blocks": 26476544, 00:20:21.059 "uuid": "276f03e9-707a-4113-9b4c-f51ddce465e6", 00:20:21.059 "assigned_rate_limits": { 00:20:21.059 "rw_ios_per_sec": 0, 00:20:21.059 "rw_mbytes_per_sec": 0, 00:20:21.059 "r_mbytes_per_sec": 0, 00:20:21.059 "w_mbytes_per_sec": 0 00:20:21.059 }, 00:20:21.059 "claimed": false, 00:20:21.059 "zoned": false, 00:20:21.059 "supported_io_types": { 00:20:21.059 "read": true, 00:20:21.059 "write": true, 00:20:21.059 "unmap": true, 00:20:21.059 "flush": false, 00:20:21.059 "reset": true, 00:20:21.059 "nvme_admin": false, 00:20:21.059 "nvme_io": false, 00:20:21.059 "nvme_io_md": false, 00:20:21.059 "write_zeroes": true, 00:20:21.059 "zcopy": false, 00:20:21.059 "get_zone_info": false, 00:20:21.059 "zone_management": false, 00:20:21.059 "zone_append": false, 00:20:21.059 "compare": false, 00:20:21.059 "compare_and_write": false, 00:20:21.059 "abort": false, 00:20:21.059 "seek_hole": true, 00:20:21.059 "seek_data": true, 00:20:21.059 "copy": false, 00:20:21.059 "nvme_iov_md": false 00:20:21.059 }, 00:20:21.059 "driver_specific": { 00:20:21.059 "lvol": { 00:20:21.059 "lvol_store_uuid": "6ae63fbe-9eed-4ecc-afaa-5f8b3e4d16da", 00:20:21.059 "base_bdev": "nvme0n1", 00:20:21.059 "thin_provision": true, 00:20:21.059 "num_allocated_clusters": 0, 00:20:21.059 "snapshot": false, 00:20:21.059 "clone": false, 00:20:21.059 "esnap_clone": false 00:20:21.059 } 00:20:21.059 } 00:20:21.059 } 00:20:21.059 ]' 00:20:21.059 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:21.059 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:21.059 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:21.059 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:21.059 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:21.059 22:10:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:21.059 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:21.059 22:10:53 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:21.321 22:10:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:20:21.321 22:10:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:21.321 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:21.321 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:21.321 22:10:54 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:20:21.321 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:21.321 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 276f03e9-707a-4113-9b4c-f51ddce465e6 00:20:21.580 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:21.580 { 00:20:21.580 "name": "276f03e9-707a-4113-9b4c-f51ddce465e6", 00:20:21.580 "aliases": [ 00:20:21.580 "lvs/nvme0n1p0" 00:20:21.580 ], 00:20:21.580 "product_name": "Logical Volume", 00:20:21.580 "block_size": 4096, 00:20:21.580 "num_blocks": 26476544, 00:20:21.580 "uuid": "276f03e9-707a-4113-9b4c-f51ddce465e6", 00:20:21.580 "assigned_rate_limits": { 00:20:21.580 "rw_ios_per_sec": 0, 00:20:21.580 "rw_mbytes_per_sec": 0, 00:20:21.580 "r_mbytes_per_sec": 0, 00:20:21.580 "w_mbytes_per_sec": 0 00:20:21.580 }, 00:20:21.580 "claimed": false, 00:20:21.580 "zoned": false, 00:20:21.580 "supported_io_types": { 00:20:21.580 "read": true, 00:20:21.580 "write": true, 00:20:21.580 "unmap": true, 00:20:21.580 "flush": false, 00:20:21.580 "reset": true, 00:20:21.580 "nvme_admin": false, 00:20:21.580 "nvme_io": false, 00:20:21.580 "nvme_io_md": false, 00:20:21.580 "write_zeroes": true, 00:20:21.580 "zcopy": false, 00:20:21.580 "get_zone_info": false, 00:20:21.580 "zone_management": false, 00:20:21.580 "zone_append": false, 00:20:21.580 "compare": false, 00:20:21.580 "compare_and_write": false, 00:20:21.580 "abort": false, 00:20:21.580 "seek_hole": true, 00:20:21.580 "seek_data": true, 00:20:21.580 "copy": false, 00:20:21.580 "nvme_iov_md": false 00:20:21.580 }, 00:20:21.580 "driver_specific": { 00:20:21.580 "lvol": { 00:20:21.580 "lvol_store_uuid": "6ae63fbe-9eed-4ecc-afaa-5f8b3e4d16da", 00:20:21.580 "base_bdev": "nvme0n1", 00:20:21.580 "thin_provision": true, 00:20:21.580 "num_allocated_clusters": 0, 00:20:21.580 "snapshot": false, 00:20:21.580 "clone": false, 00:20:21.580 "esnap_clone": false 00:20:21.580 } 00:20:21.580 } 00:20:21.580 } 00:20:21.580 ]' 00:20:21.580 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:21.580 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:21.580 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:21.580 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:21.580 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:21.580 22:10:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:21.581 22:10:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:20:21.581 22:10:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 276f03e9-707a-4113-9b4c-f51ddce465e6 -c nvc0n1p0 --l2p_dram_limit 20 00:20:21.842 [2024-12-06 22:10:54.502489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.502608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:21.842 [2024-12-06 22:10:54.502656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:21.842 [2024-12-06 22:10:54.502680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.502741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.502762] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.842 [2024-12-06 22:10:54.502777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:21.842 [2024-12-06 22:10:54.502793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.502822] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:21.842 [2024-12-06 22:10:54.503450] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:21.842 [2024-12-06 22:10:54.503526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.503564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.842 [2024-12-06 22:10:54.503582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:20:21.842 [2024-12-06 22:10:54.503598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.503656] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8c475d2d-a69b-43d9-8733-8e531e4b69ba 00:20:21.842 [2024-12-06 22:10:54.504684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.504704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:21.842 [2024-12-06 22:10:54.504715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:21.842 [2024-12-06 22:10:54.504721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.509402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.509489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.842 [2024-12-06 22:10:54.509503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:20:21.842 [2024-12-06 22:10:54.509511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.509576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.509583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.842 [2024-12-06 22:10:54.509593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:21.842 [2024-12-06 22:10:54.509599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.509635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.509642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:21.842 [2024-12-06 22:10:54.509649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:21.842 [2024-12-06 22:10:54.509655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.509672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:21.842 [2024-12-06 22:10:54.512504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.512593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.842 [2024-12-06 22:10:54.512604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:20:21.842 [2024-12-06 22:10:54.512615] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.512640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.512648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:21.842 [2024-12-06 22:10:54.512654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:21.842 [2024-12-06 22:10:54.512661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.512672] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:21.842 [2024-12-06 22:10:54.512782] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:21.842 [2024-12-06 22:10:54.512791] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:21.842 [2024-12-06 22:10:54.512800] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:21.842 [2024-12-06 22:10:54.512808] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:21.842 [2024-12-06 22:10:54.512816] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:21.842 [2024-12-06 22:10:54.512822] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:21.842 [2024-12-06 22:10:54.512829] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:21.842 [2024-12-06 22:10:54.512835] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:21.842 [2024-12-06 22:10:54.512842] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:21.842 [2024-12-06 22:10:54.512850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.512857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:21.842 [2024-12-06 22:10:54.512862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:20:21.842 [2024-12-06 22:10:54.512869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.512931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.842 [2024-12-06 22:10:54.512939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:21.842 [2024-12-06 22:10:54.512945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:21.842 [2024-12-06 22:10:54.512953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.842 [2024-12-06 22:10:54.513027] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:21.842 [2024-12-06 22:10:54.513038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:21.842 [2024-12-06 22:10:54.513045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.842 [2024-12-06 22:10:54.513051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:21.843 [2024-12-06 22:10:54.513063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:21.843 
[2024-12-06 22:10:54.513074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:21.843 [2024-12-06 22:10:54.513079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.843 [2024-12-06 22:10:54.513091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:21.843 [2024-12-06 22:10:54.513103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:21.843 [2024-12-06 22:10:54.513109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.843 [2024-12-06 22:10:54.513119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:21.843 [2024-12-06 22:10:54.513124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:21.843 [2024-12-06 22:10:54.513132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:21.843 [2024-12-06 22:10:54.513142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:21.843 [2024-12-06 22:10:54.513147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:21.843 [2024-12-06 22:10:54.513158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.843 [2024-12-06 22:10:54.513169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:21.843 [2024-12-06 22:10:54.513185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.843 [2024-12-06 22:10:54.513196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:21.843 [2024-12-06 22:10:54.513201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.843 [2024-12-06 22:10:54.513213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:21.843 [2024-12-06 22:10:54.513219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.843 [2024-12-06 22:10:54.513232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:21.843 [2024-12-06 22:10:54.513237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.843 [2024-12-06 22:10:54.513249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:21.843 [2024-12-06 22:10:54.513256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:21.843 [2024-12-06 22:10:54.513261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.843 [2024-12-06 22:10:54.513268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:21.843 [2024-12-06 22:10:54.513273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:20:21.843 [2024-12-06 22:10:54.513278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:21.843 [2024-12-06 22:10:54.513289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:21.843 [2024-12-06 22:10:54.513294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513301] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:21.843 [2024-12-06 22:10:54.513307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:21.843 [2024-12-06 22:10:54.513316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.843 [2024-12-06 22:10:54.513321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.843 [2024-12-06 22:10:54.513329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:21.843 [2024-12-06 22:10:54.513334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:21.843 [2024-12-06 22:10:54.513340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:21.843 [2024-12-06 22:10:54.513345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:21.843 [2024-12-06 22:10:54.513351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:21.843 [2024-12-06 22:10:54.513356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:21.843 [2024-12-06 22:10:54.513363] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:21.843 [2024-12-06 22:10:54.513370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.843 [2024-12-06 22:10:54.513378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:21.843 [2024-12-06 22:10:54.513383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:21.843 [2024-12-06 22:10:54.513390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:21.843 [2024-12-06 22:10:54.513395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:21.843 [2024-12-06 22:10:54.513402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:21.843 [2024-12-06 22:10:54.513408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:21.843 [2024-12-06 22:10:54.513414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:21.843 [2024-12-06 22:10:54.513419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:21.843 [2024-12-06 22:10:54.513428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:21.843 [2024-12-06 22:10:54.513434] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:21.843 [2024-12-06 22:10:54.513441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:21.843 [2024-12-06 22:10:54.513446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:21.843 [2024-12-06 22:10:54.513453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:21.843 [2024-12-06 22:10:54.513458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:21.843 [2024-12-06 22:10:54.513465] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:21.843 [2024-12-06 22:10:54.513471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.843 [2024-12-06 22:10:54.513479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:21.843 [2024-12-06 22:10:54.513485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:21.843 [2024-12-06 22:10:54.513491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:21.843 [2024-12-06 22:10:54.513497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:21.844 [2024-12-06 22:10:54.513503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.844 [2024-12-06 22:10:54.513509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:21.844 [2024-12-06 22:10:54.513517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:20:21.844 [2024-12-06 22:10:54.513523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.844 [2024-12-06 22:10:54.513561] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
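The layout dump above is internally consistent with the create parameters: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB reported for the l2p region, and at the 4096-byte block size those entries map 80 GiB of logical space (the same 0x1400000-block range the verify job covers later). The --l2p_dram_limit 20 passed to bdev_ftl_create caps how much of that table may stay resident in DRAM, which is why the log reports "l2p maximum resident size is: 19 (of 20) MiB" further down. A quick check of the arithmetic:

  echo $(( 20971520 * 4 / 1024 / 1024 ))     # L2P table: entries * 4 B -> 80 MiB ("Region l2p ... 80.00 MiB")
  echo $(( 20971520 * 4096 / 1024 / 1024 ))  # mapped capacity: entries * 4 KiB -> 81920 MiB = 80 GiB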
00:20:21.844 [2024-12-06 22:10:54.513569] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:26.046 [2024-12-06 22:10:58.286000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.286282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:26.046 [2024-12-06 22:10:58.286317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3772.416 ms 00:20:26.046 [2024-12-06 22:10:58.286328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.318740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.318802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.046 [2024-12-06 22:10:58.318821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.161 ms 00:20:26.046 [2024-12-06 22:10:58.318831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.318987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.319000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:26.046 [2024-12-06 22:10:58.319016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:26.046 [2024-12-06 22:10:58.319024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.365779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.365836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.046 [2024-12-06 22:10:58.365854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.716 ms 00:20:26.046 [2024-12-06 22:10:58.365864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.365914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.365923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.046 [2024-12-06 22:10:58.365935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:26.046 [2024-12-06 22:10:58.365947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.366602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.366642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.046 [2024-12-06 22:10:58.366657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:20:26.046 [2024-12-06 22:10:58.366665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.366794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.366813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.046 [2024-12-06 22:10:58.366828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:26.046 [2024-12-06 22:10:58.366836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.382705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.382753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.046 [2024-12-06 
22:10:58.382767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.845 ms 00:20:26.046 [2024-12-06 22:10:58.382784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.396155] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:26.046 [2024-12-06 22:10:58.404026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.404257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:26.046 [2024-12-06 22:10:58.404279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.154 ms 00:20:26.046 [2024-12-06 22:10:58.404290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.508390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.508461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:26.046 [2024-12-06 22:10:58.508477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.067 ms 00:20:26.046 [2024-12-06 22:10:58.508488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.508694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.508711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:26.046 [2024-12-06 22:10:58.508721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:20:26.046 [2024-12-06 22:10:58.508735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.535015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.535237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:26.046 [2024-12-06 22:10:58.535260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.228 ms 00:20:26.046 [2024-12-06 22:10:58.535272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.560538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.560593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:26.046 [2024-12-06 22:10:58.560606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.224 ms 00:20:26.046 [2024-12-06 22:10:58.560616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.561233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.561255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:26.046 [2024-12-06 22:10:58.561266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:20:26.046 [2024-12-06 22:10:58.561276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 22:10:58.649558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.649621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:26.046 [2024-12-06 22:10:58.649635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.240 ms 00:20:26.046 [2024-12-06 22:10:58.649646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.046 [2024-12-06 
22:10:58.678009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.046 [2024-12-06 22:10:58.678228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:26.046 [2024-12-06 22:10:58.678255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.268 ms 00:20:26.046 [2024-12-06 22:10:58.678266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.047 [2024-12-06 22:10:58.704679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.047 [2024-12-06 22:10:58.704733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:26.047 [2024-12-06 22:10:58.704745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.370 ms 00:20:26.047 [2024-12-06 22:10:58.704755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.047 [2024-12-06 22:10:58.731151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.047 [2024-12-06 22:10:58.731213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:26.047 [2024-12-06 22:10:58.731226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.348 ms 00:20:26.047 [2024-12-06 22:10:58.731236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.047 [2024-12-06 22:10:58.731289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.047 [2024-12-06 22:10:58.731305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:26.047 [2024-12-06 22:10:58.731315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:26.047 [2024-12-06 22:10:58.731325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.047 [2024-12-06 22:10:58.731416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.047 [2024-12-06 22:10:58.731430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:26.047 [2024-12-06 22:10:58.731439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:26.047 [2024-12-06 22:10:58.731450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.047 [2024-12-06 22:10:58.732629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4229.598 ms, result 0 00:20:26.047 { 00:20:26.047 "name": "ftl0", 00:20:26.047 "uuid": "8c475d2d-a69b-43d9-8733-8e531e4b69ba" 00:20:26.047 } 00:20:26.047 22:10:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:26.047 22:10:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:20:26.047 22:10:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:20:26.308 22:10:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:26.308 [2024-12-06 22:10:59.072860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:26.308 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:26.308 Zero copy mechanism will not be used. 00:20:26.308 Running I/O for 4 seconds... 
00:20:28.630 870.00 IOPS, 57.77 MiB/s
[2024-12-06T22:11:02.445Z] 1765.50 IOPS, 117.24 MiB/s
[2024-12-06T22:11:03.389Z] 2115.00 IOPS, 140.45 MiB/s
[2024-12-06T22:11:03.389Z] 2036.00 IOPS, 135.20 MiB/s
00:20:30.517 Latency(us)
[2024-12-06T22:11:03.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:30.517 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:20:30.517 ftl0 : 4.00 2035.34 135.16 0.00 0.00 514.49 155.18 3188.58
[2024-12-06T22:11:03.389Z] ===================================================================================================================
[2024-12-06T22:11:03.389Z] Total : 2035.34 135.16 0.00 0.00 514.49 155.18 3188.58
[2024-12-06 22:11:03.083641] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:20:30.517 "results": [
00:20:30.517 {
00:20:30.517 "job": "ftl0",
00:20:30.517 "core_mask": "0x1",
00:20:30.517 "workload": "randwrite",
00:20:30.517 "status": "finished",
00:20:30.517 "queue_depth": 1,
00:20:30.517 "io_size": 69632,
00:20:30.517 "runtime": 4.001782,
00:20:30.517 "iops": 2035.3432545800845,
00:20:30.517 "mibps": 135.15951299945874,
00:20:30.517 "io_failed": 0,
00:20:30.517 "io_timeout": 0,
00:20:30.517 "avg_latency_us": 514.4880245549417,
00:20:30.517 "min_latency_us": 155.17538461538462,
00:20:30.517 "max_latency_us": 3188.5784615384614
00:20:30.517 }
00:20:30.517 ],
00:20:30.517 "core_count": 1
00:20:30.517 }
00:20:30.517 22:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-06 22:11:03.195988] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
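The queue-depth-1 numbers just reported pass a Little's-law sanity check: with a single outstanding I/O, IOPS should be roughly 1 / average latency. A sketch of that check, with values taken from the table above (the few-percent gap is bookkeeping outside the latency measurement within the 4 s window):

  awk 'BEGIN { qd = 1; avg_us = 514.49; printf "%.0f IOPS expected\n", qd / (avg_us * 1e-6) }'  # ~1944 vs. 2035.34 measured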
00:20:32.418 5946.00 IOPS, 23.23 MiB/s
[2024-12-06T22:11:06.236Z] 5760.00 IOPS, 22.50 MiB/s
[2024-12-06T22:11:07.623Z] 5769.67 IOPS, 22.54 MiB/s
[2024-12-06T22:11:07.624Z] 6424.00 IOPS, 25.09 MiB/s
00:20:34.752 Latency(us)
[2024-12-06T22:11:07.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:34.752 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:20:34.752 ftl0 : 4.03 6414.66 25.06 0.00 0.00 19890.82 247.34 83079.48
[2024-12-06T22:11:07.624Z] ===================================================================================================================
[2024-12-06T22:11:07.624Z] Total : 6414.66 25.06 0.00 0.00 19890.82 0.00 83079.48
[2024-12-06 22:11:07.231625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:20:34.752 "results": [
00:20:34.752 {
00:20:34.752 "job": "ftl0",
00:20:34.752 "core_mask": "0x1",
00:20:34.752 "workload": "randwrite",
00:20:34.752 "status": "finished",
00:20:34.752 "queue_depth": 128,
00:20:34.752 "io_size": 4096,
00:20:34.752 "runtime": 4.02578,
00:20:34.752 "iops": 6414.657532204939,
00:20:34.752 "mibps": 25.057255985175544,
00:20:34.752 "io_failed": 0,
00:20:34.752 "io_timeout": 0,
00:20:34.752 "avg_latency_us": 19890.820215422744,
00:20:34.752 "min_latency_us": 247.3353846153846,
00:20:34.752 "max_latency_us": 83079.48307692308
00:20:34.752 }
00:20:34.752 ],
00:20:34.752 "core_count": 1
00:20:34.752 }
00:20:34.752 22:11:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-06 22:11:07.330816] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
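The depth-128 table above satisfies the same Little's-law check (128 / 19890.82 us ~= 6435 vs. 6414.66 measured). Each run also dumps its results as JSON, which lends itself to the same jq post-processing this script already applies to bdev_get_bdevs output; a sketch, assuming the blob has been captured to a hypothetical results.json:

  # results.json: hypothetical capture of the JSON blob printed after a run.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us, max \(.max_latency_us) us"' results.json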
00:20:36.639 4822.00 IOPS, 18.84 MiB/s
[2024-12-06T22:11:10.455Z] 4720.50 IOPS, 18.44 MiB/s
[2024-12-06T22:11:11.398Z] 4836.00 IOPS, 18.89 MiB/s
[2024-12-06T22:11:11.398Z] 5188.50 IOPS, 20.27 MiB/s
00:20:38.526 Latency(us)
[2024-12-06T22:11:11.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:38.526 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:38.526 Verification LBA range: start 0x0 length 0x1400000
00:20:38.526 ftl0 : 4.02 5198.02 20.30 0.00 0.00 24547.04 299.32 38716.65
[2024-12-06T22:11:11.398Z] ===================================================================================================================
[2024-12-06T22:11:11.399Z] Total : 5198.02 20.30 0.00 0.00 24547.04 0.00 38716.65
[2024-12-06 22:11:11.364224] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:20:38.527 "results": [
00:20:38.527 {
00:20:38.527 "job": "ftl0",
00:20:38.527 "core_mask": "0x1",
00:20:38.527 "workload": "verify",
00:20:38.527 "status": "finished",
00:20:38.527 "verify_range": {
00:20:38.527 "start": 0,
00:20:38.527 "length": 20971520
00:20:38.527 },
00:20:38.527 "queue_depth": 128,
00:20:38.527 "io_size": 4096,
00:20:38.527 "runtime": 4.017296,
00:20:38.527 "iops": 5198.023745325214,
00:20:38.527 "mibps": 20.304780255176617,
00:20:38.527 "io_failed": 0,
00:20:38.527 "io_timeout": 0,
00:20:38.527 "avg_latency_us": 24547.03591786817,
00:20:38.527 "min_latency_us": 299.32307692307694,
00:20:38.527 "max_latency_us": 38716.65230769231
00:20:38.527 }
00:20:38.527 ],
00:20:38.527 "core_count": 1
00:20:38.527 }
00:20:38.527 22:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:20:38.788 [2024-12-06 22:11:11.579742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.788 [2024-12-06 22:11:11.579813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:38.788 [2024-12-06 22:11:11.579829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:38.788 [2024-12-06 22:11:11.579841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.788 [2024-12-06 22:11:11.579864] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:38.788 [2024-12-06 22:11:11.582993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.788 [2024-12-06 22:11:11.583037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:38.788 [2024-12-06 22:11:11.583052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.104 ms
00:20:38.788 [2024-12-06 22:11:11.583061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:38.788 [2024-12-06 22:11:11.586419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:38.788 [2024-12-06 22:11:11.586470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:38.788 [2024-12-06 22:11:11.586493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.325 ms
00:20:38.788 [2024-12-06 22:11:11.586502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.049 [2024-12-06 22:11:11.802850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.049 [2024-12-06 22:11:11.802917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:20:39.049 [2024-12-06 22:11:11.802940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 216.317 ms 00:20:39.049 [2024-12-06 22:11:11.802950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.049 [2024-12-06 22:11:11.809261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.049 [2024-12-06 22:11:11.809456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:39.049 [2024-12-06 22:11:11.809485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.257 ms 00:20:39.049 [2024-12-06 22:11:11.809498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.049 [2024-12-06 22:11:11.836998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.049 [2024-12-06 22:11:11.837212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.050 [2024-12-06 22:11:11.837243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.425 ms 00:20:39.050 [2024-12-06 22:11:11.837251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.050 [2024-12-06 22:11:11.855842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.050 [2024-12-06 22:11:11.855902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.050 [2024-12-06 22:11:11.855919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.450 ms 00:20:39.050 [2024-12-06 22:11:11.855927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.050 [2024-12-06 22:11:11.856121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.050 [2024-12-06 22:11:11.856134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.050 [2024-12-06 22:11:11.856148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:39.050 [2024-12-06 22:11:11.856156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.050 [2024-12-06 22:11:11.883164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.050 [2024-12-06 22:11:11.883369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:39.050 [2024-12-06 22:11:11.883397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.986 ms 00:20:39.050 [2024-12-06 22:11:11.883404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.050 [2024-12-06 22:11:11.909696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.050 [2024-12-06 22:11:11.909746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:39.050 [2024-12-06 22:11:11.909761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.242 ms 00:20:39.050 [2024-12-06 22:11:11.909768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.312 [2024-12-06 22:11:11.935138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.312 [2024-12-06 22:11:11.935375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.312 [2024-12-06 22:11:11.935403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.313 ms 00:20:39.312 [2024-12-06 22:11:11.935411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.312 [2024-12-06 22:11:11.960739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.312 [2024-12-06 22:11:11.960789] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:39.312 [2024-12-06 22:11:11.960807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.226 ms
00:20:39.312 [2024-12-06 22:11:11.960814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.312 [2024-12-06 22:11:11.960866] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:39.312 [2024-12-06 22:11:11.960882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free
00:20:39.313 [2024-12-06 22:11:11.961866] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:39.313 [2024-12-06 22:11:11.961877] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8c475d2d-a69b-43d9-8733-8e531e4b69ba
00:20:39.314 [2024-12-06 22:11:11.961888] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:39.314 [2024-12-06 22:11:11.961897] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:39.314 [2024-12-06 22:11:11.961904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:39.314 [2024-12-06 22:11:11.961913] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:39.314 [2024-12-06 22:11:11.961921] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:39.314 [2024-12-06 22:11:11.961931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:39.314 [2024-12-06 22:11:11.961938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:39.314 [2024-12-06 22:11:11.961948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:39.314 [2024-12-06 22:11:11.961954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:39.314 [2024-12-06 22:11:11.961963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.314 [2024-12-06 22:11:11.961970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:39.314 [2024-12-06 22:11:11.961983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms
00:20:39.314 [2024-12-06 22:11:11.961990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:11.975569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.314 [2024-12-06 22:11:11.975617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:39.314 [2024-12-06 22:11:11.975633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.534 ms
00:20:39.314 [2024-12-06 22:11:11.975641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:11.976066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.314 [2024-12-06 22:11:11.976077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:39.314 [2024-12-06 22:11:11.976090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms
00:20:39.314 [2024-12-06 22:11:11.976098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.015518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.015568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:39.314 [2024-12-06 22:11:12.015586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.015596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
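The trace_step records above always arrive in the same four-entry shape (Action/Rollback, name, duration, status), so per-step timings can be skimmed out of a saved console log with ordinary text tools. A minimal sketch, assuming GNU grep and a hypothetical capture file named console.log:

    # pair each management step name with the duration entry that follows it
    grep -o 'name: [^[]*\|duration: [0-9.]* ms' console.log | paste - -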
00:20:39.314 [2024-12-06 22:11:12.015669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.015677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:39.314 [2024-12-06 22:11:12.015688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.015695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.015806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.015817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:39.314 [2024-12-06 22:11:12.015828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.015835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.015855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.015863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:39.314 [2024-12-06 22:11:12.015873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.015881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.100372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.100433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:39.314 [2024-12-06 22:11:12.100452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.100460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.169396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.169456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:39.314 [2024-12-06 22:11:12.169471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.169480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.169571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.169582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:39.314 [2024-12-06 22:11:12.169594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.169602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.169670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.169681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:39.314 [2024-12-06 22:11:12.169692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.169700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.169801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.169814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:39.314 [2024-12-06 22:11:12.169828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.169836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.169870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.169880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:39.314 [2024-12-06 22:11:12.169890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.169898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.169940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.169952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:39.314 [2024-12-06 22:11:12.169963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.169979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.170030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:39.314 [2024-12-06 22:11:12.170041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:39.314 [2024-12-06 22:11:12.170051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:39.314 [2024-12-06 22:11:12.170059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.314 [2024-12-06 22:11:12.170246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 590.417 ms, result 0
00:20:39.314 true
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76087
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76087 ']'
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76087
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76087
00:20:39.576 killing process with pid 76087
Received shutdown signal, test time was about 4.000000 seconds
00:20:39.576
00:20:39.576                                                            Latency(us)
00:20:39.576 [2024-12-06T22:11:12.448Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:39.576 [2024-12-06T22:11:12.448Z] ===================================================================================================================
00:20:39.576 [2024-12-06T22:11:12.448Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76087'
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76087
00:20:39.576 22:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76087
00:20:42.880 Remove shared memory files
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
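The killprocess call traced above is the autotest harness's guarded kill: it checks that a pid was supplied and is still alive, looks up the process name, refuses to touch a sudo wrapper, then kills and reaps the target. A simplified sketch of that pattern (not the verbatim helper from common/autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0     # process already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # on Linux, as traced above
        [ "$name" = sudo ] && return 1             # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap it and propagate its exit status
    }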
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:20:42.880 ************************************
00:20:42.880 END TEST ftl_bdevperf
00:20:42.880 ************************************
00:20:42.880
00:20:42.880 real	0m24.462s
00:20:42.880 user	0m26.856s
00:20:42.880 sys	0m0.950s
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:42.880 22:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:42.880 22:11:15 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:20:42.880 22:11:15 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:20:42.880 22:11:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:42.880 22:11:15 ftl -- common/autotest_common.sh@10 -- # set +x
00:20:42.880 ************************************
00:20:42.880 START TEST ftl_trim
00:20:42.880 ************************************
00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:20:42.881 * Looking for test storage...
00:20:42.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version
00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.881 22:11:15 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:42.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.881 --rc genhtml_branch_coverage=1 00:20:42.881 --rc genhtml_function_coverage=1 00:20:42.881 --rc genhtml_legend=1 00:20:42.881 --rc geninfo_all_blocks=1 00:20:42.881 --rc geninfo_unexecuted_blocks=1 00:20:42.881 00:20:42.881 ' 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.881 --rc genhtml_branch_coverage=1 00:20:42.881 --rc genhtml_function_coverage=1 00:20:42.881 --rc genhtml_legend=1 00:20:42.881 --rc geninfo_all_blocks=1 00:20:42.881 --rc geninfo_unexecuted_blocks=1 00:20:42.881 00:20:42.881 ' 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.881 --rc genhtml_branch_coverage=1 00:20:42.881 --rc genhtml_function_coverage=1 00:20:42.881 --rc genhtml_legend=1 00:20:42.881 --rc geninfo_all_blocks=1 00:20:42.881 --rc geninfo_unexecuted_blocks=1 00:20:42.881 00:20:42.881 ' 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.881 --rc genhtml_branch_coverage=1 00:20:42.881 --rc genhtml_function_coverage=1 00:20:42.881 --rc genhtml_legend=1 00:20:42.881 --rc geninfo_all_blocks=1 00:20:42.881 --rc geninfo_unexecuted_blocks=1 00:20:42.881 00:20:42.881 ' 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
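The lt/cmp_versions trace above is scripts/common.sh splitting two version strings on '.', '-' and ':' and comparing them element by element (here deciding that lcov 1.15 sorts before 2). A condensed sketch of the same comparison, assuming purely numeric components:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: v op=$2 ver1 ver2
        read -ra ver1 <<< "$1"                    # e.g. "1.15" -> (1 15)
        read -ra ver2 <<< "$3"                    # e.g. "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' ]]          # versions equal: only inclusive operators succeed
    }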
00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:42.881 22:11:15 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76440 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76440 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76440 ']' 00:20:42.881 22:11:15 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.881 22:11:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:42.881 [2024-12-06 22:11:15.541463] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:20:42.881 [2024-12-06 22:11:15.541808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76440 ] 00:20:42.881 [2024-12-06 22:11:15.705465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:43.142 [2024-12-06 22:11:15.833067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.142 [2024-12-06 22:11:15.833487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.142 [2024-12-06 22:11:15.833411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.713 22:11:16 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.713 22:11:16 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:43.713 22:11:16 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:43.713 22:11:16 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:43.713 22:11:16 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:43.713 22:11:16 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:43.713 22:11:16 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:43.713 22:11:16 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:43.974 22:11:16 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:43.974 22:11:16 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:43.974 22:11:16 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:43.974 22:11:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:43.974 22:11:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:43.974 22:11:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:43.974 22:11:16 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:43.974 22:11:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:44.236 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:44.236 { 00:20:44.236 "name": "nvme0n1", 00:20:44.236 "aliases": [ 
00:20:44.236 "54c15be9-17af-4cf4-bb53-6b49b659117b" 00:20:44.236 ], 00:20:44.236 "product_name": "NVMe disk", 00:20:44.236 "block_size": 4096, 00:20:44.236 "num_blocks": 1310720, 00:20:44.236 "uuid": "54c15be9-17af-4cf4-bb53-6b49b659117b", 00:20:44.236 "numa_id": -1, 00:20:44.236 "assigned_rate_limits": { 00:20:44.236 "rw_ios_per_sec": 0, 00:20:44.236 "rw_mbytes_per_sec": 0, 00:20:44.236 "r_mbytes_per_sec": 0, 00:20:44.236 "w_mbytes_per_sec": 0 00:20:44.236 }, 00:20:44.236 "claimed": true, 00:20:44.236 "claim_type": "read_many_write_one", 00:20:44.236 "zoned": false, 00:20:44.236 "supported_io_types": { 00:20:44.236 "read": true, 00:20:44.236 "write": true, 00:20:44.236 "unmap": true, 00:20:44.236 "flush": true, 00:20:44.236 "reset": true, 00:20:44.236 "nvme_admin": true, 00:20:44.236 "nvme_io": true, 00:20:44.236 "nvme_io_md": false, 00:20:44.236 "write_zeroes": true, 00:20:44.236 "zcopy": false, 00:20:44.236 "get_zone_info": false, 00:20:44.236 "zone_management": false, 00:20:44.236 "zone_append": false, 00:20:44.236 "compare": true, 00:20:44.236 "compare_and_write": false, 00:20:44.236 "abort": true, 00:20:44.236 "seek_hole": false, 00:20:44.236 "seek_data": false, 00:20:44.236 "copy": true, 00:20:44.236 "nvme_iov_md": false 00:20:44.236 }, 00:20:44.236 "driver_specific": { 00:20:44.236 "nvme": [ 00:20:44.236 { 00:20:44.236 "pci_address": "0000:00:11.0", 00:20:44.236 "trid": { 00:20:44.236 "trtype": "PCIe", 00:20:44.236 "traddr": "0000:00:11.0" 00:20:44.236 }, 00:20:44.236 "ctrlr_data": { 00:20:44.236 "cntlid": 0, 00:20:44.236 "vendor_id": "0x1b36", 00:20:44.236 "model_number": "QEMU NVMe Ctrl", 00:20:44.236 "serial_number": "12341", 00:20:44.236 "firmware_revision": "8.0.0", 00:20:44.236 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:44.236 "oacs": { 00:20:44.236 "security": 0, 00:20:44.236 "format": 1, 00:20:44.236 "firmware": 0, 00:20:44.236 "ns_manage": 1 00:20:44.236 }, 00:20:44.236 "multi_ctrlr": false, 00:20:44.236 "ana_reporting": false 00:20:44.236 }, 00:20:44.236 "vs": { 00:20:44.236 "nvme_version": "1.4" 00:20:44.236 }, 00:20:44.236 "ns_data": { 00:20:44.236 "id": 1, 00:20:44.236 "can_share": false 00:20:44.236 } 00:20:44.236 } 00:20:44.236 ], 00:20:44.236 "mp_policy": "active_passive" 00:20:44.236 } 00:20:44.236 } 00:20:44.236 ]' 00:20:44.236 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:44.236 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:44.236 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:44.236 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:44.236 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:44.236 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:20:44.236 22:11:17 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:44.236 22:11:17 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:44.236 22:11:17 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:44.236 22:11:17 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:44.236 22:11:17 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:44.498 22:11:17 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=6ae63fbe-9eed-4ecc-afaa-5f8b3e4d16da 00:20:44.499 22:11:17 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:44.499 22:11:17 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 6ae63fbe-9eed-4ecc-afaa-5f8b3e4d16da 00:20:44.760 22:11:17 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:45.022 22:11:17 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=05089f38-eae2-42db-a370-e26f2e0f1ae8 00:20:45.022 22:11:17 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 05089f38-eae2-42db-a370-e26f2e0f1ae8 00:20:45.283 22:11:17 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=35856be3-973f-403e-b966-f691cf3cd24d 00:20:45.283 22:11:17 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 35856be3-973f-403e-b966-f691cf3cd24d 00:20:45.283 22:11:17 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:45.283 22:11:17 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:45.284 22:11:17 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=35856be3-973f-403e-b966-f691cf3cd24d 00:20:45.284 22:11:17 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:45.284 22:11:17 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 35856be3-973f-403e-b966-f691cf3cd24d 00:20:45.284 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=35856be3-973f-403e-b966-f691cf3cd24d 00:20:45.284 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:45.284 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:45.284 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:45.284 22:11:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35856be3-973f-403e-b966-f691cf3cd24d 00:20:45.546 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:45.546 { 00:20:45.546 "name": "35856be3-973f-403e-b966-f691cf3cd24d", 00:20:45.546 "aliases": [ 00:20:45.546 "lvs/nvme0n1p0" 00:20:45.546 ], 00:20:45.546 "product_name": "Logical Volume", 00:20:45.546 "block_size": 4096, 00:20:45.546 "num_blocks": 26476544, 00:20:45.546 "uuid": "35856be3-973f-403e-b966-f691cf3cd24d", 00:20:45.546 "assigned_rate_limits": { 00:20:45.546 "rw_ios_per_sec": 0, 00:20:45.546 "rw_mbytes_per_sec": 0, 00:20:45.546 "r_mbytes_per_sec": 0, 00:20:45.546 "w_mbytes_per_sec": 0 00:20:45.546 }, 00:20:45.546 "claimed": false, 00:20:45.546 "zoned": false, 00:20:45.546 "supported_io_types": { 00:20:45.546 "read": true, 00:20:45.546 "write": true, 00:20:45.546 "unmap": true, 00:20:45.546 "flush": false, 00:20:45.546 "reset": true, 00:20:45.546 "nvme_admin": false, 00:20:45.546 "nvme_io": false, 00:20:45.546 "nvme_io_md": false, 00:20:45.546 "write_zeroes": true, 00:20:45.546 "zcopy": false, 00:20:45.546 "get_zone_info": false, 00:20:45.546 "zone_management": false, 00:20:45.546 "zone_append": false, 00:20:45.546 "compare": false, 00:20:45.546 "compare_and_write": false, 00:20:45.546 "abort": false, 00:20:45.546 "seek_hole": true, 00:20:45.546 "seek_data": true, 00:20:45.546 "copy": false, 00:20:45.546 "nvme_iov_md": false 00:20:45.546 }, 00:20:45.546 "driver_specific": { 00:20:45.546 "lvol": { 00:20:45.546 "lvol_store_uuid": "05089f38-eae2-42db-a370-e26f2e0f1ae8", 00:20:45.546 "base_bdev": "nvme0n1", 00:20:45.546 "thin_provision": true, 00:20:45.546 "num_allocated_clusters": 0, 00:20:45.546 "snapshot": false, 00:20:45.546 "clone": false, 00:20:45.546 "esnap_clone": false 00:20:45.546 } 00:20:45.546 } 00:20:45.546 } 00:20:45.546 ]' 00:20:45.546 22:11:18 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:45.546 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:45.546 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:45.546 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:45.546 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:45.546 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:45.546 22:11:18 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:45.546 22:11:18 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:45.546 22:11:18 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:45.807 22:11:18 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:45.807 22:11:18 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:45.807 22:11:18 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 35856be3-973f-403e-b966-f691cf3cd24d 00:20:45.807 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=35856be3-973f-403e-b966-f691cf3cd24d 00:20:45.807 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:45.807 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:45.807 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:45.807 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35856be3-973f-403e-b966-f691cf3cd24d 00:20:46.069 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:46.069 { 00:20:46.069 "name": "35856be3-973f-403e-b966-f691cf3cd24d", 00:20:46.069 "aliases": [ 00:20:46.069 "lvs/nvme0n1p0" 00:20:46.069 ], 00:20:46.069 "product_name": "Logical Volume", 00:20:46.069 "block_size": 4096, 00:20:46.069 "num_blocks": 26476544, 00:20:46.069 "uuid": "35856be3-973f-403e-b966-f691cf3cd24d", 00:20:46.069 "assigned_rate_limits": { 00:20:46.069 "rw_ios_per_sec": 0, 00:20:46.069 "rw_mbytes_per_sec": 0, 00:20:46.069 "r_mbytes_per_sec": 0, 00:20:46.069 "w_mbytes_per_sec": 0 00:20:46.069 }, 00:20:46.069 "claimed": false, 00:20:46.069 "zoned": false, 00:20:46.069 "supported_io_types": { 00:20:46.069 "read": true, 00:20:46.069 "write": true, 00:20:46.069 "unmap": true, 00:20:46.069 "flush": false, 00:20:46.069 "reset": true, 00:20:46.069 "nvme_admin": false, 00:20:46.069 "nvme_io": false, 00:20:46.069 "nvme_io_md": false, 00:20:46.069 "write_zeroes": true, 00:20:46.069 "zcopy": false, 00:20:46.069 "get_zone_info": false, 00:20:46.069 "zone_management": false, 00:20:46.069 "zone_append": false, 00:20:46.069 "compare": false, 00:20:46.069 "compare_and_write": false, 00:20:46.069 "abort": false, 00:20:46.069 "seek_hole": true, 00:20:46.069 "seek_data": true, 00:20:46.069 "copy": false, 00:20:46.069 "nvme_iov_md": false 00:20:46.069 }, 00:20:46.069 "driver_specific": { 00:20:46.069 "lvol": { 00:20:46.069 "lvol_store_uuid": "05089f38-eae2-42db-a370-e26f2e0f1ae8", 00:20:46.069 "base_bdev": "nvme0n1", 00:20:46.069 "thin_provision": true, 00:20:46.069 "num_allocated_clusters": 0, 00:20:46.069 "snapshot": false, 00:20:46.069 "clone": false, 00:20:46.069 "esnap_clone": false 00:20:46.069 } 00:20:46.069 } 00:20:46.069 } 00:20:46.069 ]' 00:20:46.069 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:46.069 22:11:18 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:46.069 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:46.069 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:46.069 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:46.069 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:46.069 22:11:18 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:46.069 22:11:18 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:46.330 22:11:18 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:46.330 22:11:18 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:46.330 22:11:18 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 35856be3-973f-403e-b966-f691cf3cd24d 00:20:46.330 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=35856be3-973f-403e-b966-f691cf3cd24d 00:20:46.330 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:46.330 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:46.330 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:46.330 22:11:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35856be3-973f-403e-b966-f691cf3cd24d 00:20:46.330 22:11:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:46.330 { 00:20:46.330 "name": "35856be3-973f-403e-b966-f691cf3cd24d", 00:20:46.330 "aliases": [ 00:20:46.330 "lvs/nvme0n1p0" 00:20:46.330 ], 00:20:46.330 "product_name": "Logical Volume", 00:20:46.330 "block_size": 4096, 00:20:46.330 "num_blocks": 26476544, 00:20:46.330 "uuid": "35856be3-973f-403e-b966-f691cf3cd24d", 00:20:46.330 "assigned_rate_limits": { 00:20:46.330 "rw_ios_per_sec": 0, 00:20:46.330 "rw_mbytes_per_sec": 0, 00:20:46.330 "r_mbytes_per_sec": 0, 00:20:46.330 "w_mbytes_per_sec": 0 00:20:46.330 }, 00:20:46.330 "claimed": false, 00:20:46.330 "zoned": false, 00:20:46.330 "supported_io_types": { 00:20:46.330 "read": true, 00:20:46.330 "write": true, 00:20:46.330 "unmap": true, 00:20:46.330 "flush": false, 00:20:46.330 "reset": true, 00:20:46.330 "nvme_admin": false, 00:20:46.330 "nvme_io": false, 00:20:46.330 "nvme_io_md": false, 00:20:46.330 "write_zeroes": true, 00:20:46.330 "zcopy": false, 00:20:46.330 "get_zone_info": false, 00:20:46.330 "zone_management": false, 00:20:46.330 "zone_append": false, 00:20:46.330 "compare": false, 00:20:46.331 "compare_and_write": false, 00:20:46.331 "abort": false, 00:20:46.331 "seek_hole": true, 00:20:46.331 "seek_data": true, 00:20:46.331 "copy": false, 00:20:46.331 "nvme_iov_md": false 00:20:46.331 }, 00:20:46.331 "driver_specific": { 00:20:46.331 "lvol": { 00:20:46.331 "lvol_store_uuid": "05089f38-eae2-42db-a370-e26f2e0f1ae8", 00:20:46.331 "base_bdev": "nvme0n1", 00:20:46.331 "thin_provision": true, 00:20:46.331 "num_allocated_clusters": 0, 00:20:46.331 "snapshot": false, 00:20:46.331 "clone": false, 00:20:46.331 "esnap_clone": false 00:20:46.331 } 00:20:46.331 } 00:20:46.331 } 00:20:46.331 ]' 00:20:46.331 22:11:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:46.591 22:11:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:46.591 22:11:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:46.591 22:11:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:20:46.591 22:11:19 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:46.591 22:11:19 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:46.591 22:11:19 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:46.591 22:11:19 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 35856be3-973f-403e-b966-f691cf3cd24d -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:46.591 [2024-12-06 22:11:19.456134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.591 [2024-12-06 22:11:19.456189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:46.591 [2024-12-06 22:11:19.456207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.591 [2024-12-06 22:11:19.456216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.853 [2024-12-06 22:11:19.458947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.853 [2024-12-06 22:11:19.458978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.853 [2024-12-06 22:11:19.458991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:20:46.853 [2024-12-06 22:11:19.458999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.853 [2024-12-06 22:11:19.459097] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:46.853 [2024-12-06 22:11:19.459831] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:46.853 [2024-12-06 22:11:19.459853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.853 [2024-12-06 22:11:19.459862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.853 [2024-12-06 22:11:19.459872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:20:46.853 [2024-12-06 22:11:19.459879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.853 [2024-12-06 22:11:19.459945] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fdfad126-9ab1-4e1e-9416-c27cc38aa4c4 00:20:46.853 [2024-12-06 22:11:19.460944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.853 [2024-12-06 22:11:19.460969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:46.853 [2024-12-06 22:11:19.460978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:46.853 [2024-12-06 22:11:19.460987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.853 [2024-12-06 22:11:19.466025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.853 [2024-12-06 22:11:19.466131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.853 [2024-12-06 22:11:19.466208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.979 ms 00:20:46.853 [2024-12-06 22:11:19.466239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.853 [2024-12-06 22:11:19.466406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.853 [2024-12-06 22:11:19.466442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.853 [2024-12-06 22:11:19.466466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.069 ms 00:20:46.853 [2024-12-06 22:11:19.466528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.853 [2024-12-06 22:11:19.466577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.853 [2024-12-06 22:11:19.466605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:46.853 [2024-12-06 22:11:19.466982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:46.853 [2024-12-06 22:11:19.467035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.853 [2024-12-06 22:11:19.467136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:46.853 [2024-12-06 22:11:19.470358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.853 [2024-12-06 22:11:19.470440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.853 [2024-12-06 22:11:19.470503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:20:46.853 [2024-12-06 22:11:19.470522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.853 [2024-12-06 22:11:19.470588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.853 [2024-12-06 22:11:19.470696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:46.853 [2024-12-06 22:11:19.470737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:46.853 [2024-12-06 22:11:19.470752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.853 [2024-12-06 22:11:19.470780] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:46.853 [2024-12-06 22:11:19.470905] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:46.853 [2024-12-06 22:11:19.471032] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:46.853 [2024-12-06 22:11:19.471062] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:46.853 [2024-12-06 22:11:19.471091] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:46.853 [2024-12-06 22:11:19.471119] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:46.853 [2024-12-06 22:11:19.471147] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:46.853 [2024-12-06 22:11:19.471213] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:46.853 [2024-12-06 22:11:19.471235] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:46.854 [2024-12-06 22:11:19.471255] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:46.854 [2024-12-06 22:11:19.471272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.854 [2024-12-06 22:11:19.471287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:46.854 [2024-12-06 22:11:19.471303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:20:46.854 [2024-12-06 22:11:19.471317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.854 [2024-12-06 22:11:19.471473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.854 
[2024-12-06 22:11:19.471492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:46.854 [2024-12-06 22:11:19.471510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:46.854 [2024-12-06 22:11:19.471524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.854 [2024-12-06 22:11:19.471622] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:46.854 [2024-12-06 22:11:19.471644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:46.854 [2024-12-06 22:11:19.471662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.854 [2024-12-06 22:11:19.471676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.854 [2024-12-06 22:11:19.471693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:46.854 [2024-12-06 22:11:19.471742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:46.854 [2024-12-06 22:11:19.471762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:46.854 [2024-12-06 22:11:19.471776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:46.854 [2024-12-06 22:11:19.471792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:46.854 [2024-12-06 22:11:19.471837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.854 [2024-12-06 22:11:19.471855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:46.854 [2024-12-06 22:11:19.471869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:46.854 [2024-12-06 22:11:19.471885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.854 [2024-12-06 22:11:19.471922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:46.854 [2024-12-06 22:11:19.471944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:46.854 [2024-12-06 22:11:19.471959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.854 [2024-12-06 22:11:19.471976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:46.854 [2024-12-06 22:11:19.471998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:46.854 [2024-12-06 22:11:19.472014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.854 [2024-12-06 22:11:19.472028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:46.854 [2024-12-06 22:11:19.472072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:46.854 [2024-12-06 22:11:19.472092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.854 [2024-12-06 22:11:19.472108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:46.854 [2024-12-06 22:11:19.472122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:46.854 [2024-12-06 22:11:19.472137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.854 [2024-12-06 22:11:19.472151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:46.854 [2024-12-06 22:11:19.472170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:46.854 [2024-12-06 22:11:19.472220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.854 [2024-12-06 22:11:19.472263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:46.854 [2024-12-06 22:11:19.472283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:46.854 [2024-12-06 22:11:19.472321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.854 [2024-12-06 22:11:19.472341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:46.854 [2024-12-06 22:11:19.472359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:46.854 [2024-12-06 22:11:19.472373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.854 [2024-12-06 22:11:19.472388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:46.854 [2024-12-06 22:11:19.472402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:46.854 [2024-12-06 22:11:19.472447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.854 [2024-12-06 22:11:19.472467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:46.854 [2024-12-06 22:11:19.472484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:46.854 [2024-12-06 22:11:19.472498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.854 [2024-12-06 22:11:19.472514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:46.854 [2024-12-06 22:11:19.472527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:46.854 [2024-12-06 22:11:19.472542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.854 [2024-12-06 22:11:19.472590] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:46.854 [2024-12-06 22:11:19.472612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:46.854 [2024-12-06 22:11:19.472631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.854 [2024-12-06 22:11:19.472647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.854 [2024-12-06 22:11:19.472662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:46.854 [2024-12-06 22:11:19.472678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:46.854 [2024-12-06 22:11:19.472692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:46.854 [2024-12-06 22:11:19.472707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:46.854 [2024-12-06 22:11:19.472755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:46.854 [2024-12-06 22:11:19.472773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:46.854 [2024-12-06 22:11:19.472789] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:46.854 [2024-12-06 22:11:19.472817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.854 [2024-12-06 22:11:19.472845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:46.854 [2024-12-06 22:11:19.472872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:46.854 [2024-12-06 22:11:19.472920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:46.854 [2024-12-06 22:11:19.472946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:46.854 [2024-12-06 22:11:19.472971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:46.854 [2024-12-06 22:11:19.472994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:46.854 [2024-12-06 22:11:19.473016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:46.854 [2024-12-06 22:11:19.473039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:46.854 [2024-12-06 22:11:19.473087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:46.854 [2024-12-06 22:11:19.473114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:46.854 [2024-12-06 22:11:19.473139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:46.854 [2024-12-06 22:11:19.473162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:46.854 [2024-12-06 22:11:19.473196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:46.854 [2024-12-06 22:11:19.473220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:46.854 [2024-12-06 22:11:19.473314] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:46.854 [2024-12-06 22:11:19.473346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.854 [2024-12-06 22:11:19.473372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:46.854 [2024-12-06 22:11:19.473398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:46.854 [2024-12-06 22:11:19.473423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:46.854 [2024-12-06 22:11:19.473482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:46.854 [2024-12-06 22:11:19.473509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.854 [2024-12-06 22:11:19.473526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:46.854 [2024-12-06 22:11:19.473544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.940 ms 00:20:46.854 [2024-12-06 22:11:19.473561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.855 [2024-12-06 22:11:19.473628] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:46.855 [2024-12-06 22:11:19.473675] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:49.471 [2024-12-06 22:11:22.216047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.471 [2024-12-06 22:11:22.216324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:49.471 [2024-12-06 22:11:22.216421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2742.402 ms 00:20:49.471 [2024-12-06 22:11:22.216454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.471 [2024-12-06 22:11:22.244689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.471 [2024-12-06 22:11:22.244853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.471 [2024-12-06 22:11:22.244917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.927 ms 00:20:49.471 [2024-12-06 22:11:22.244943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.471 [2024-12-06 22:11:22.245120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.471 [2024-12-06 22:11:22.245158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:49.471 [2024-12-06 22:11:22.245210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:49.471 [2024-12-06 22:11:22.245367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.471 [2024-12-06 22:11:22.289751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.289929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.758 [2024-12-06 22:11:22.289950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.346 ms 00:20:49.758 [2024-12-06 22:11:22.289966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.290076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.290098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.758 [2024-12-06 22:11:22.290109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:49.758 [2024-12-06 22:11:22.290120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.290542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.290570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.758 [2024-12-06 22:11:22.290582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:20:49.758 [2024-12-06 22:11:22.290592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.290710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.290725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.758 [2024-12-06 22:11:22.290750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:49.758 [2024-12-06 22:11:22.290763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.306739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.306769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:49.758 [2024-12-06 22:11:22.306780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.945 ms 00:20:49.758 [2024-12-06 22:11:22.306790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.318926] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:49.758 [2024-12-06 22:11:22.336012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.336046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:49.758 [2024-12-06 22:11:22.336059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.125 ms 00:20:49.758 [2024-12-06 22:11:22.336067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.408878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.408915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:49.758 [2024-12-06 22:11:22.408928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.744 ms 00:20:49.758 [2024-12-06 22:11:22.408936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.409147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.409159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:49.758 [2024-12-06 22:11:22.409183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:20:49.758 [2024-12-06 22:11:22.409192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.431807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.431840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:49.758 [2024-12-06 22:11:22.431853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.583 ms 00:20:49.758 [2024-12-06 22:11:22.431861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.454420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.454570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:49.758 [2024-12-06 22:11:22.454591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.496 ms 00:20:49.758 [2024-12-06 22:11:22.454599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.455213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.455233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:49.758 [2024-12-06 22:11:22.455244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:20:49.758 [2024-12-06 22:11:22.455252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.524759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.524791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:49.758 [2024-12-06 22:11:22.524806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.470 ms 00:20:49.758 [2024-12-06 22:11:22.524814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:49.758 [2024-12-06 22:11:22.549345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.549375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:49.758 [2024-12-06 22:11:22.549388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.442 ms 00:20:49.758 [2024-12-06 22:11:22.549396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.572258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.572324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:49.758 [2024-12-06 22:11:22.572340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.800 ms 00:20:49.758 [2024-12-06 22:11:22.572348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.596252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.596297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:49.758 [2024-12-06 22:11:22.596309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.826 ms 00:20:49.758 [2024-12-06 22:11:22.596316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.596394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.596408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:49.758 [2024-12-06 22:11:22.596421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:49.758 [2024-12-06 22:11:22.596429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.596509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.758 [2024-12-06 22:11:22.596518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:49.758 [2024-12-06 22:11:22.596529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:49.758 [2024-12-06 22:11:22.596536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.758 [2024-12-06 22:11:22.597680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:49.758 [2024-12-06 22:11:22.600657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3141.028 ms, result 0 00:20:49.758 { 00:20:49.758 "name": "ftl0", 00:20:49.758 "uuid": "fdfad126-9ab1-4e1e-9416-c27cc38aa4c4" 00:20:49.758 } 00:20:49.758 [2024-12-06 22:11:22.601379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:49.758 22:11:22 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:49.758 22:11:22 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:49.758 22:11:22 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:49.758 22:11:22 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:49.758 22:11:22 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:49.758 22:11:22 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:49.758 22:11:22 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:50.017 22:11:22 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:50.277 [ 00:20:50.277 { 00:20:50.277 "name": "ftl0", 00:20:50.277 "aliases": [ 00:20:50.277 "fdfad126-9ab1-4e1e-9416-c27cc38aa4c4" 00:20:50.277 ], 00:20:50.277 "product_name": "FTL disk", 00:20:50.277 "block_size": 4096, 00:20:50.277 "num_blocks": 23592960, 00:20:50.277 "uuid": "fdfad126-9ab1-4e1e-9416-c27cc38aa4c4", 00:20:50.277 "assigned_rate_limits": { 00:20:50.277 "rw_ios_per_sec": 0, 00:20:50.277 "rw_mbytes_per_sec": 0, 00:20:50.277 "r_mbytes_per_sec": 0, 00:20:50.277 "w_mbytes_per_sec": 0 00:20:50.277 }, 00:20:50.277 "claimed": false, 00:20:50.277 "zoned": false, 00:20:50.277 "supported_io_types": { 00:20:50.277 "read": true, 00:20:50.277 "write": true, 00:20:50.277 "unmap": true, 00:20:50.277 "flush": true, 00:20:50.277 "reset": false, 00:20:50.277 "nvme_admin": false, 00:20:50.277 "nvme_io": false, 00:20:50.277 "nvme_io_md": false, 00:20:50.277 "write_zeroes": true, 00:20:50.277 "zcopy": false, 00:20:50.277 "get_zone_info": false, 00:20:50.277 "zone_management": false, 00:20:50.277 "zone_append": false, 00:20:50.277 "compare": false, 00:20:50.277 "compare_and_write": false, 00:20:50.277 "abort": false, 00:20:50.277 "seek_hole": false, 00:20:50.277 "seek_data": false, 00:20:50.277 "copy": false, 00:20:50.277 "nvme_iov_md": false 00:20:50.277 }, 00:20:50.277 "driver_specific": { 00:20:50.277 "ftl": { 00:20:50.277 "base_bdev": "35856be3-973f-403e-b966-f691cf3cd24d", 00:20:50.277 "cache": "nvc0n1p0" 00:20:50.277 } 00:20:50.277 } 00:20:50.277 } 00:20:50.277 ] 00:20:50.277 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:50.277 22:11:23 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:50.277 22:11:23 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:50.534 22:11:23 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:50.534 22:11:23 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:50.793 22:11:23 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:50.793 { 00:20:50.793 "name": "ftl0", 00:20:50.793 "aliases": [ 00:20:50.793 "fdfad126-9ab1-4e1e-9416-c27cc38aa4c4" 00:20:50.793 ], 00:20:50.793 "product_name": "FTL disk", 00:20:50.793 "block_size": 4096, 00:20:50.793 "num_blocks": 23592960, 00:20:50.793 "uuid": "fdfad126-9ab1-4e1e-9416-c27cc38aa4c4", 00:20:50.793 "assigned_rate_limits": { 00:20:50.793 "rw_ios_per_sec": 0, 00:20:50.793 "rw_mbytes_per_sec": 0, 00:20:50.793 "r_mbytes_per_sec": 0, 00:20:50.793 "w_mbytes_per_sec": 0 00:20:50.793 }, 00:20:50.793 "claimed": false, 00:20:50.793 "zoned": false, 00:20:50.793 "supported_io_types": { 00:20:50.793 "read": true, 00:20:50.793 "write": true, 00:20:50.793 "unmap": true, 00:20:50.793 "flush": true, 00:20:50.793 "reset": false, 00:20:50.793 "nvme_admin": false, 00:20:50.793 "nvme_io": false, 00:20:50.793 "nvme_io_md": false, 00:20:50.793 "write_zeroes": true, 00:20:50.793 "zcopy": false, 00:20:50.793 "get_zone_info": false, 00:20:50.793 "zone_management": false, 00:20:50.793 "zone_append": false, 00:20:50.793 "compare": false, 00:20:50.793 "compare_and_write": false, 00:20:50.793 "abort": false, 00:20:50.793 "seek_hole": false, 00:20:50.793 "seek_data": false, 00:20:50.793 "copy": false, 00:20:50.793 "nvme_iov_md": false 00:20:50.793 }, 00:20:50.793 "driver_specific": { 00:20:50.793 "ftl": { 00:20:50.793 "base_bdev": "35856be3-973f-403e-b966-f691cf3cd24d", 
00:20:50.793 "cache": "nvc0n1p0" 00:20:50.793 } 00:20:50.793 } 00:20:50.793 } 00:20:50.793 ]' 00:20:50.793 22:11:23 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:50.793 22:11:23 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:50.793 22:11:23 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:50.793 [2024-12-06 22:11:23.580760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.793 [2024-12-06 22:11:23.580798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:50.793 [2024-12-06 22:11:23.580814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:50.793 [2024-12-06 22:11:23.580827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.793 [2024-12-06 22:11:23.580877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:50.793 [2024-12-06 22:11:23.583591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.793 [2024-12-06 22:11:23.583619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:50.793 [2024-12-06 22:11:23.583636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:20:50.793 [2024-12-06 22:11:23.583646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.793 [2024-12-06 22:11:23.584186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.793 [2024-12-06 22:11:23.584201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:50.793 [2024-12-06 22:11:23.584212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:20:50.793 [2024-12-06 22:11:23.584219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.793 [2024-12-06 22:11:23.587862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.793 [2024-12-06 22:11:23.587879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:50.793 [2024-12-06 22:11:23.587890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.613 ms 00:20:50.793 [2024-12-06 22:11:23.587899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.793 [2024-12-06 22:11:23.595026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.793 [2024-12-06 22:11:23.595051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:50.793 [2024-12-06 22:11:23.595063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.077 ms 00:20:50.793 [2024-12-06 22:11:23.595071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.793 [2024-12-06 22:11:23.618014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.793 [2024-12-06 22:11:23.618043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:50.793 [2024-12-06 22:11:23.618059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.857 ms 00:20:50.793 [2024-12-06 22:11:23.618066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.793 [2024-12-06 22:11:23.633498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.793 [2024-12-06 22:11:23.633529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:50.793 [2024-12-06 22:11:23.633544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.370 ms 00:20:50.793 [2024-12-06 22:11:23.633555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.793 [2024-12-06 22:11:23.633759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.793 [2024-12-06 22:11:23.633781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:50.793 [2024-12-06 22:11:23.633792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:20:50.793 [2024-12-06 22:11:23.633800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.793 [2024-12-06 22:11:23.657337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.793 [2024-12-06 22:11:23.657365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:50.793 [2024-12-06 22:11:23.657377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.504 ms 00:20:50.793 [2024-12-06 22:11:23.657385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.054 [2024-12-06 22:11:23.680789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.055 [2024-12-06 22:11:23.680914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:51.055 [2024-12-06 22:11:23.680978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.340 ms 00:20:51.055 [2024-12-06 22:11:23.681022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.055 [2024-12-06 22:11:23.703195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.055 [2024-12-06 22:11:23.703300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:51.055 [2024-12-06 22:11:23.703355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.103 ms 00:20:51.055 [2024-12-06 22:11:23.703377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.055 [2024-12-06 22:11:23.726350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.055 [2024-12-06 22:11:23.726451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:51.055 [2024-12-06 22:11:23.726505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.841 ms 00:20:51.055 [2024-12-06 22:11:23.726528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.055 [2024-12-06 22:11:23.726638] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:51.055 [2024-12-06 22:11:23.726698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.726736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.726799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.726833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.726861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.726921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.726951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727008] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.727936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 
[2024-12-06 22:11:23.728237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.728911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:51.055 [2024-12-06 22:11:23.729109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:51.055 [2024-12-06 22:11:23.729134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:51.056 [2024-12-06 22:11:23.729492] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:51.056 [2024-12-06 22:11:23.729504] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfad126-9ab1-4e1e-9416-c27cc38aa4c4 00:20:51.056 [2024-12-06 22:11:23.729511] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:51.056 [2024-12-06 22:11:23.729520] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:51.056 [2024-12-06 22:11:23.729527] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:51.056 [2024-12-06 22:11:23.729539] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:51.056 [2024-12-06 22:11:23.729546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:51.056 [2024-12-06 22:11:23.729555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:51.056 [2024-12-06 22:11:23.729563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:51.056 [2024-12-06 22:11:23.729570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:51.056 [2024-12-06 22:11:23.729577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:51.056 [2024-12-06 22:11:23.729586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.056 [2024-12-06 22:11:23.729594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:51.056 [2024-12-06 22:11:23.729606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:20:51.056 [2024-12-06 22:11:23.729613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.056 [2024-12-06 22:11:23.742674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.056 [2024-12-06 22:11:23.742768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:51.056 [2024-12-06 22:11:23.742835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.009 ms 00:20:51.056 [2024-12-06 22:11:23.742878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.056 [2024-12-06 22:11:23.743302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.056 [2024-12-06 22:11:23.743390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:51.056 [2024-12-06 22:11:23.743441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:20:51.056 [2024-12-06 22:11:23.743508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.056 [2024-12-06 22:11:23.789509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.056 [2024-12-06 22:11:23.789612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.056 [2024-12-06 22:11:23.789664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.056 [2024-12-06 22:11:23.789687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.056 [2024-12-06 22:11:23.789796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.056 [2024-12-06 22:11:23.789890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.056 [2024-12-06 22:11:23.789940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.056 [2024-12-06 22:11:23.789975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.056 [2024-12-06 22:11:23.790050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.056 [2024-12-06 22:11:23.790073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.056 [2024-12-06 22:11:23.790099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.056 [2024-12-06 22:11:23.790166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.056 [2024-12-06 22:11:23.790226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.056 [2024-12-06 22:11:23.790246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.056 [2024-12-06 22:11:23.790268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.056 [2024-12-06 22:11:23.790357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.056 [2024-12-06 22:11:23.875495] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.056 [2024-12-06 22:11:23.875633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.056 [2024-12-06 22:11:23.875684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.056 [2024-12-06 22:11:23.875707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.316 [2024-12-06 22:11:23.941109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.316 [2024-12-06 22:11:23.941275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.316 [2024-12-06 22:11:23.941337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.316 [2024-12-06 22:11:23.941363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.316 [2024-12-06 22:11:23.941479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.316 [2024-12-06 22:11:23.941546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.316 [2024-12-06 22:11:23.941575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.316 [2024-12-06 22:11:23.941597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.316 [2024-12-06 22:11:23.941710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.316 [2024-12-06 22:11:23.941773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.316 [2024-12-06 22:11:23.941824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.316 [2024-12-06 22:11:23.941847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.316 [2024-12-06 22:11:23.941991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.316 [2024-12-06 22:11:23.942096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.316 [2024-12-06 22:11:23.942124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.316 [2024-12-06 22:11:23.942146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.316 [2024-12-06 22:11:23.942239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.316 [2024-12-06 22:11:23.942264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:51.316 [2024-12-06 22:11:23.942341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.316 [2024-12-06 22:11:23.942360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.316 [2024-12-06 22:11:23.942430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.316 [2024-12-06 22:11:23.942452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.316 [2024-12-06 22:11:23.942511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.316 [2024-12-06 22:11:23.942533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.316 [2024-12-06 22:11:23.942644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.316 [2024-12-06 22:11:23.942671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.316 [2024-12-06 22:11:23.942692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.316 [2024-12-06 22:11:23.942807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:51.316 [2024-12-06 22:11:23.943029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 362.240 ms, result 0 00:20:51.316 true 00:20:51.316 22:11:23 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76440 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76440 ']' 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76440 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76440 00:20:51.316 killing process with pid 76440 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76440' 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76440 00:20:51.316 22:11:23 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76440 00:20:57.912 22:11:30 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:58.855 65536+0 records in 00:20:58.855 65536+0 records out 00:20:58.855 268435456 bytes (268 MB, 256 MiB) copied, 1.11925 s, 240 MB/s 00:20:58.855 22:11:31 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:58.855 [2024-12-06 22:11:31.561883] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:20:58.855 [2024-12-06 22:11:31.562278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76629 ] 00:20:59.116 [2024-12-06 22:11:31.726423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.116 [2024-12-06 22:11:31.848915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.378 [2024-12-06 22:11:32.147584] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:59.378 [2024-12-06 22:11:32.147673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:59.641 [2024-12-06 22:11:32.311169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.311421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:59.641 [2024-12-06 22:11:32.311446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:59.641 [2024-12-06 22:11:32.311456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.314564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.314744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.641 [2024-12-06 22:11:32.314764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.080 ms 00:20:59.641 [2024-12-06 22:11:32.314773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.315472] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:59.641 [2024-12-06 22:11:32.316448] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:59.641 [2024-12-06 22:11:32.316494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.316504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.641 [2024-12-06 22:11:32.316515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:20:59.641 [2024-12-06 22:11:32.316524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.318462] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:59.641 [2024-12-06 22:11:32.332643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.332838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:59.641 [2024-12-06 22:11:32.332863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.184 ms 00:20:59.641 [2024-12-06 22:11:32.332872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.333068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.333096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:59.641 [2024-12-06 22:11:32.333108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:59.641 [2024-12-06 22:11:32.333116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.341501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:59.641 [2024-12-06 22:11:32.341549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.641 [2024-12-06 22:11:32.341561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.337 ms 00:20:59.641 [2024-12-06 22:11:32.341569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.341676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.341686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.641 [2024-12-06 22:11:32.341696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:59.641 [2024-12-06 22:11:32.341704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.341736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.341745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:59.641 [2024-12-06 22:11:32.341753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:59.641 [2024-12-06 22:11:32.341760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.341783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:59.641 [2024-12-06 22:11:32.346018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.346057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.641 [2024-12-06 22:11:32.346068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.241 ms 00:20:59.641 [2024-12-06 22:11:32.346076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.346155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.346165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:59.641 [2024-12-06 22:11:32.346195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:59.641 [2024-12-06 22:11:32.346203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.346230] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:59.641 [2024-12-06 22:11:32.346254] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:59.641 [2024-12-06 22:11:32.346291] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:59.641 [2024-12-06 22:11:32.346307] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:59.641 [2024-12-06 22:11:32.346413] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:59.641 [2024-12-06 22:11:32.346425] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:59.641 [2024-12-06 22:11:32.346436] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:59.641 [2024-12-06 22:11:32.346450] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:59.641 [2024-12-06 22:11:32.346459] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:59.641 [2024-12-06 22:11:32.346467] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:59.641 [2024-12-06 22:11:32.346476] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:59.641 [2024-12-06 22:11:32.346484] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:59.641 [2024-12-06 22:11:32.346492] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:59.641 [2024-12-06 22:11:32.346501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.346508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:59.641 [2024-12-06 22:11:32.346516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:59.641 [2024-12-06 22:11:32.346523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.346613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.641 [2024-12-06 22:11:32.346624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:59.641 [2024-12-06 22:11:32.346633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:59.641 [2024-12-06 22:11:32.346640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.641 [2024-12-06 22:11:32.346744] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:59.641 [2024-12-06 22:11:32.346754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:59.641 [2024-12-06 22:11:32.346763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:59.641 [2024-12-06 22:11:32.346771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.641 [2024-12-06 22:11:32.346779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:59.641 [2024-12-06 22:11:32.346785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:59.641 [2024-12-06 22:11:32.346792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:59.641 [2024-12-06 22:11:32.346798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:59.641 [2024-12-06 22:11:32.346805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:59.641 [2024-12-06 22:11:32.346812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:59.641 [2024-12-06 22:11:32.346819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:59.642 [2024-12-06 22:11:32.346840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:59.642 [2024-12-06 22:11:32.346847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:59.642 [2024-12-06 22:11:32.346855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:59.642 [2024-12-06 22:11:32.346862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:59.642 [2024-12-06 22:11:32.346868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.642 [2024-12-06 22:11:32.346877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:59.642 [2024-12-06 22:11:32.346885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:59.642 [2024-12-06 22:11:32.346892] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.642 [2024-12-06 22:11:32.346899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:59.642 [2024-12-06 22:11:32.346906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:59.642 [2024-12-06 22:11:32.346913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.642 [2024-12-06 22:11:32.346920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:59.642 [2024-12-06 22:11:32.346926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:59.642 [2024-12-06 22:11:32.346932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.642 [2024-12-06 22:11:32.346939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:59.642 [2024-12-06 22:11:32.346946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:59.642 [2024-12-06 22:11:32.346952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.642 [2024-12-06 22:11:32.346959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:59.642 [2024-12-06 22:11:32.346965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:59.642 [2024-12-06 22:11:32.346971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.642 [2024-12-06 22:11:32.346978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:59.642 [2024-12-06 22:11:32.346984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:59.642 [2024-12-06 22:11:32.346991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:59.642 [2024-12-06 22:11:32.346998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:59.642 [2024-12-06 22:11:32.347005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:59.642 [2024-12-06 22:11:32.347012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:59.642 [2024-12-06 22:11:32.347018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:59.642 [2024-12-06 22:11:32.347025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:59.642 [2024-12-06 22:11:32.347033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.642 [2024-12-06 22:11:32.347040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:59.642 [2024-12-06 22:11:32.347047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:59.642 [2024-12-06 22:11:32.347053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.642 [2024-12-06 22:11:32.347060] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:59.642 [2024-12-06 22:11:32.347068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:59.642 [2024-12-06 22:11:32.347078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:59.642 [2024-12-06 22:11:32.347085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.642 [2024-12-06 22:11:32.347093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:59.642 [2024-12-06 22:11:32.347101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:59.642 [2024-12-06 22:11:32.347108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:59.642 
[2024-12-06 22:11:32.347115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:59.642 [2024-12-06 22:11:32.347121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:59.642 [2024-12-06 22:11:32.347128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:59.642 [2024-12-06 22:11:32.347137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:59.642 [2024-12-06 22:11:32.347147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:59.642 [2024-12-06 22:11:32.347156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:59.642 [2024-12-06 22:11:32.347163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:59.642 [2024-12-06 22:11:32.347205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:59.642 [2024-12-06 22:11:32.347214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:59.642 [2024-12-06 22:11:32.347223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:59.642 [2024-12-06 22:11:32.347231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:59.642 [2024-12-06 22:11:32.347237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:59.642 [2024-12-06 22:11:32.347245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:59.642 [2024-12-06 22:11:32.347252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:59.642 [2024-12-06 22:11:32.347260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:59.642 [2024-12-06 22:11:32.347267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:59.642 [2024-12-06 22:11:32.347274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:59.642 [2024-12-06 22:11:32.347282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:59.642 [2024-12-06 22:11:32.347289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:59.642 [2024-12-06 22:11:32.347296] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:59.642 [2024-12-06 22:11:32.347305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:59.642 [2024-12-06 22:11:32.347313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:59.642 [2024-12-06 22:11:32.347320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:59.642 [2024-12-06 22:11:32.347327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:59.642 [2024-12-06 22:11:32.347334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:59.642 [2024-12-06 22:11:32.347342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.642 [2024-12-06 22:11:32.347353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:59.642 [2024-12-06 22:11:32.347361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:20:59.642 [2024-12-06 22:11:32.347368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.642 [2024-12-06 22:11:32.379731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.642 [2024-12-06 22:11:32.379934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.642 [2024-12-06 22:11:32.379954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.302 ms 00:20:59.642 [2024-12-06 22:11:32.379963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.642 [2024-12-06 22:11:32.380128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.642 [2024-12-06 22:11:32.380140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:59.642 [2024-12-06 22:11:32.380150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:59.642 [2024-12-06 22:11:32.380158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.642 [2024-12-06 22:11:32.427167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.642 [2024-12-06 22:11:32.427231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:59.642 [2024-12-06 22:11:32.427249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.962 ms 00:20:59.642 [2024-12-06 22:11:32.427258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.642 [2024-12-06 22:11:32.427373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.642 [2024-12-06 22:11:32.427385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:59.642 [2024-12-06 22:11:32.427394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:59.642 [2024-12-06 22:11:32.427402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.642 [2024-12-06 22:11:32.427932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.642 [2024-12-06 22:11:32.427954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:59.642 [2024-12-06 22:11:32.427974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:20:59.642 [2024-12-06 22:11:32.427983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.642 [2024-12-06 22:11:32.428201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.642 [2024-12-06 22:11:32.428213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:59.642 [2024-12-06 22:11:32.428223] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:20:59.642 [2024-12-06 22:11:32.428231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.642 [2024-12-06 22:11:32.444600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.642 [2024-12-06 22:11:32.444644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:59.642 [2024-12-06 22:11:32.444655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.345 ms 00:20:59.642 [2024-12-06 22:11:32.444664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.642 [2024-12-06 22:11:32.459108] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:59.642 [2024-12-06 22:11:32.459163] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:59.642 [2024-12-06 22:11:32.459194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.642 [2024-12-06 22:11:32.459204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:59.643 [2024-12-06 22:11:32.459214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.415 ms 00:20:59.643 [2024-12-06 22:11:32.459221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.643 [2024-12-06 22:11:32.484854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.643 [2024-12-06 22:11:32.484905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:59.643 [2024-12-06 22:11:32.484918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.532 ms 00:20:59.643 [2024-12-06 22:11:32.484927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.643 [2024-12-06 22:11:32.497861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.643 [2024-12-06 22:11:32.497908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:59.643 [2024-12-06 22:11:32.497920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.859 ms 00:20:59.643 [2024-12-06 22:11:32.497927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.510893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.510940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:59.905 [2024-12-06 22:11:32.510952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.874 ms 00:20:59.905 [2024-12-06 22:11:32.510959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.511631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.511660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:59.905 [2024-12-06 22:11:32.511671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:20:59.905 [2024-12-06 22:11:32.511679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.578094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.578162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:59.905 [2024-12-06 22:11:32.578213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.387 ms 00:20:59.905 [2024-12-06 22:11:32.578224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.589282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:59.905 [2024-12-06 22:11:32.609536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.609593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:59.905 [2024-12-06 22:11:32.609607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.206 ms 00:20:59.905 [2024-12-06 22:11:32.609616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.609721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.609733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:59.905 [2024-12-06 22:11:32.609744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:59.905 [2024-12-06 22:11:32.609753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.609816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.609826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:59.905 [2024-12-06 22:11:32.609835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:59.905 [2024-12-06 22:11:32.609843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.609877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.609889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:59.905 [2024-12-06 22:11:32.609897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:59.905 [2024-12-06 22:11:32.609906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.609945] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:59.905 [2024-12-06 22:11:32.609956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.609964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:59.905 [2024-12-06 22:11:32.609973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:59.905 [2024-12-06 22:11:32.609981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.636889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.636941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:59.905 [2024-12-06 22:11:32.636955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.885 ms 00:20:59.905 [2024-12-06 22:11:32.636964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-12-06 22:11:32.637095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-12-06 22:11:32.637108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:59.905 [2024-12-06 22:11:32.637120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:59.905 [2024-12-06 22:11:32.637128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:59.905 [2024-12-06 22:11:32.638242] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:59.905 [2024-12-06 22:11:32.641891] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 326.720 ms, result 0 00:20:59.905 [2024-12-06 22:11:32.643110] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.905 [2024-12-06 22:11:32.656899] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.851  [2024-12-06T22:11:34.667Z] Copying: 17/256 [MB] (17 MBps) [2024-12-06T22:11:36.053Z] Copying: 53/256 [MB] (36 MBps) [2024-12-06T22:11:36.995Z] Copying: 95/256 [MB] (41 MBps) [2024-12-06T22:11:37.937Z] Copying: 137/256 [MB] (41 MBps) [2024-12-06T22:11:38.879Z] Copying: 161/256 [MB] (24 MBps) [2024-12-06T22:11:39.821Z] Copying: 174/256 [MB] (12 MBps) [2024-12-06T22:11:40.764Z] Copying: 198/256 [MB] (24 MBps) [2024-12-06T22:11:41.707Z] Copying: 211/256 [MB] (12 MBps) [2024-12-06T22:11:42.651Z] Copying: 224/256 [MB] (13 MBps) [2024-12-06T22:11:42.651Z] Copying: 256/256 [MB] (average 26 MBps)[2024-12-06 22:11:42.437638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.779 [2024-12-06 22:11:42.444892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.779 [2024-12-06 22:11:42.444923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:09.779 [2024-12-06 22:11:42.444934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:09.779 [2024-12-06 22:11:42.444945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.779 [2024-12-06 22:11:42.444962] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:09.779 [2024-12-06 22:11:42.447069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.779 [2024-12-06 22:11:42.447093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:09.779 [2024-12-06 22:11:42.447101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:21:09.779 [2024-12-06 22:11:42.447107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.779 [2024-12-06 22:11:42.448566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.779 [2024-12-06 22:11:42.448591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:09.779 [2024-12-06 22:11:42.448599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:21:09.779 [2024-12-06 22:11:42.448604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.779 [2024-12-06 22:11:42.453958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.779 [2024-12-06 22:11:42.453988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:09.779 [2024-12-06 22:11:42.453996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.339 ms 00:21:09.779 [2024-12-06 22:11:42.454002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.779 [2024-12-06 22:11:42.459498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.779 [2024-12-06 22:11:42.459522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:09.780 
[2024-12-06 22:11:42.459530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.470 ms 00:21:09.780 [2024-12-06 22:11:42.459537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.780 [2024-12-06 22:11:42.477402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.780 [2024-12-06 22:11:42.477515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:09.780 [2024-12-06 22:11:42.477529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.823 ms 00:21:09.780 [2024-12-06 22:11:42.477535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.780 [2024-12-06 22:11:42.488969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.780 [2024-12-06 22:11:42.489077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:09.780 [2024-12-06 22:11:42.489093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.408 ms 00:21:09.780 [2024-12-06 22:11:42.489099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.780 [2024-12-06 22:11:42.489196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.780 [2024-12-06 22:11:42.489203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:09.780 [2024-12-06 22:11:42.489210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:09.780 [2024-12-06 22:11:42.489221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.780 [2024-12-06 22:11:42.506894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.780 [2024-12-06 22:11:42.506919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:09.780 [2024-12-06 22:11:42.506927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.661 ms 00:21:09.780 [2024-12-06 22:11:42.506933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.780 [2024-12-06 22:11:42.524392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.780 [2024-12-06 22:11:42.524416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:09.780 [2024-12-06 22:11:42.524424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.423 ms 00:21:09.780 [2024-12-06 22:11:42.524430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.780 [2024-12-06 22:11:42.541421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.780 [2024-12-06 22:11:42.541445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:09.780 [2024-12-06 22:11:42.541452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.963 ms 00:21:09.780 [2024-12-06 22:11:42.541458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.780 [2024-12-06 22:11:42.558200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.780 [2024-12-06 22:11:42.558301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:09.780 [2024-12-06 22:11:42.558313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.695 ms 00:21:09.780 [2024-12-06 22:11:42.558318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.780 [2024-12-06 22:11:42.558343] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:09.780 [2024-12-06 22:11:42.558353] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 
22:11:42.558492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:21:09.780 [2024-12-06 22:11:42.558633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:09.780 [2024-12-06 22:11:42.558706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:09.781 [2024-12-06 22:11:42.558924] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:09.781 [2024-12-06 22:11:42.558930] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfad126-9ab1-4e1e-9416-c27cc38aa4c4 00:21:09.781 [2024-12-06 22:11:42.558936] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:09.781 [2024-12-06 22:11:42.558941] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:09.781 [2024-12-06 22:11:42.558947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:09.781 [2024-12-06 22:11:42.558953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:09.781 [2024-12-06 22:11:42.558958] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:09.781 [2024-12-06 22:11:42.558964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:09.781 [2024-12-06 22:11:42.558970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:09.781 [2024-12-06 22:11:42.558975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:09.781 [2024-12-06 22:11:42.558979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:09.781 [2024-12-06 22:11:42.558985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.781 [2024-12-06 22:11:42.558992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:09.781 [2024-12-06 22:11:42.558998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:21:09.781 [2024-12-06 22:11:42.559003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.781 [2024-12-06 22:11:42.568414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.781 [2024-12-06 22:11:42.568439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:09.781 [2024-12-06 22:11:42.568447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.398 ms 00:21:09.781 [2024-12-06 22:11:42.568452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.781 [2024-12-06 22:11:42.568728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.781 [2024-12-06 22:11:42.568739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:09.781 [2024-12-06 22:11:42.568745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:21:09.781 [2024-12-06 22:11:42.568751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.781 [2024-12-06 22:11:42.596516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.781 [2024-12-06 22:11:42.596544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.781 [2024-12-06 22:11:42.596552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.781 [2024-12-06 22:11:42.596558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.781 [2024-12-06 22:11:42.596614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.781 [2024-12-06 22:11:42.596621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.781 [2024-12-06 22:11:42.596627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.781 [2024-12-06 22:11:42.596632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:09.781 [2024-12-06 22:11:42.596667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.781 [2024-12-06 22:11:42.596674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.781 [2024-12-06 22:11:42.596680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.781 [2024-12-06 22:11:42.596685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.781 [2024-12-06 22:11:42.596698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.781 [2024-12-06 22:11:42.596706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.781 [2024-12-06 22:11:42.596712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.781 [2024-12-06 22:11:42.596717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.042 [2024-12-06 22:11:42.655889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.042 [2024-12-06 22:11:42.655921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.042 [2024-12-06 22:11:42.655929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.042 [2024-12-06 22:11:42.655935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.042 [2024-12-06 22:11:42.703738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.042 [2024-12-06 22:11:42.703771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.042 [2024-12-06 22:11:42.703779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.042 [2024-12-06 22:11:42.703785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.042 [2024-12-06 22:11:42.703826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.042 [2024-12-06 22:11:42.703832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:10.042 [2024-12-06 22:11:42.703839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.042 [2024-12-06 22:11:42.703845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.042 [2024-12-06 22:11:42.703867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.042 [2024-12-06 22:11:42.703874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:10.042 [2024-12-06 22:11:42.703884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.042 [2024-12-06 22:11:42.703889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.042 [2024-12-06 22:11:42.703957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.042 [2024-12-06 22:11:42.703964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:10.042 [2024-12-06 22:11:42.703971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.042 [2024-12-06 22:11:42.703977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.042 [2024-12-06 22:11:42.704000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.042 [2024-12-06 22:11:42.704007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:10.042 [2024-12-06 22:11:42.704013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.042 
[2024-12-06 22:11:42.704028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.042 [2024-12-06 22:11:42.704057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.042 [2024-12-06 22:11:42.704064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:10.042 [2024-12-06 22:11:42.704070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.042 [2024-12-06 22:11:42.704075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.042 [2024-12-06 22:11:42.704110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.042 [2024-12-06 22:11:42.704117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:10.042 [2024-12-06 22:11:42.704125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.042 [2024-12-06 22:11:42.704131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.042 [2024-12-06 22:11:42.704252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 259.350 ms, result 0 00:21:10.982 00:21:10.982 00:21:10.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.982 22:11:43 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76760 00:21:10.982 22:11:43 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76760 00:21:10.982 22:11:43 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76760 ']' 00:21:10.982 22:11:43 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.982 22:11:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.982 22:11:43 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.982 22:11:43 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.982 22:11:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:10.982 22:11:43 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:10.982 [2024-12-06 22:11:43.687882] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:21:10.982 [2024-12-06 22:11:43.688013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76760 ] 00:21:10.982 [2024-12-06 22:11:43.849858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.243 [2024-12-06 22:11:43.965568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.815 22:11:44 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.815 22:11:44 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:11.815 22:11:44 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:12.076 [2024-12-06 22:11:44.867575] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.076 [2024-12-06 22:11:44.867660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.339 [2024-12-06 22:11:45.048793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.049042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:12.339 [2024-12-06 22:11:45.049071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.339 [2024-12-06 22:11:45.049081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.052095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.052276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.339 [2024-12-06 22:11:45.052301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.987 ms 00:21:12.339 [2024-12-06 22:11:45.052309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.052438] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:12.339 [2024-12-06 22:11:45.053147] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:12.339 [2024-12-06 22:11:45.053188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.053197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.339 [2024-12-06 22:11:45.053209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:21:12.339 [2024-12-06 22:11:45.053218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.054892] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:12.339 [2024-12-06 22:11:45.069109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.069159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:12.339 [2024-12-06 22:11:45.069186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.223 ms 00:21:12.339 [2024-12-06 22:11:45.069198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.069307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.069322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:12.339 [2024-12-06 22:11:45.069332] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:12.339 [2024-12-06 22:11:45.069342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.077133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.077196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.339 [2024-12-06 22:11:45.077207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.736 ms 00:21:12.339 [2024-12-06 22:11:45.077216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.077327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.077341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.339 [2024-12-06 22:11:45.077350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:12.339 [2024-12-06 22:11:45.077362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.077386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.077397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:12.339 [2024-12-06 22:11:45.077405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:12.339 [2024-12-06 22:11:45.077414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.077437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:12.339 [2024-12-06 22:11:45.081452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.081487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.339 [2024-12-06 22:11:45.081500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.017 ms 00:21:12.339 [2024-12-06 22:11:45.081508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.081596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.339 [2024-12-06 22:11:45.081605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:12.339 [2024-12-06 22:11:45.081616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:12.339 [2024-12-06 22:11:45.081626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.339 [2024-12-06 22:11:45.081650] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:12.339 [2024-12-06 22:11:45.081672] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:12.339 [2024-12-06 22:11:45.081720] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:12.339 [2024-12-06 22:11:45.081736] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:12.339 [2024-12-06 22:11:45.081845] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:12.339 [2024-12-06 22:11:45.081857] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:12.340 [2024-12-06 22:11:45.081873] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:12.340 [2024-12-06 22:11:45.081883] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:12.340 [2024-12-06 22:11:45.081895] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:12.340 [2024-12-06 22:11:45.081903] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:12.340 [2024-12-06 22:11:45.081912] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:12.340 [2024-12-06 22:11:45.081920] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:12.340 [2024-12-06 22:11:45.081931] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:12.340 [2024-12-06 22:11:45.081939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.340 [2024-12-06 22:11:45.081949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:12.340 [2024-12-06 22:11:45.081957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:21:12.340 [2024-12-06 22:11:45.081966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.340 [2024-12-06 22:11:45.082056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.340 [2024-12-06 22:11:45.082066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:12.340 [2024-12-06 22:11:45.082074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:12.340 [2024-12-06 22:11:45.082083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.340 [2024-12-06 22:11:45.082208] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:12.340 [2024-12-06 22:11:45.082221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:12.340 [2024-12-06 22:11:45.082230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.340 [2024-12-06 22:11:45.082239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:12.340 [2024-12-06 22:11:45.082259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:12.340 [2024-12-06 22:11:45.082279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:12.340 [2024-12-06 22:11:45.082286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.340 [2024-12-06 22:11:45.082302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:12.340 [2024-12-06 22:11:45.082311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:12.340 [2024-12-06 22:11:45.082317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.340 [2024-12-06 22:11:45.082326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:12.340 [2024-12-06 22:11:45.082333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:12.340 [2024-12-06 22:11:45.082341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.340 
[2024-12-06 22:11:45.082348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:12.340 [2024-12-06 22:11:45.082359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:12.340 [2024-12-06 22:11:45.082372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:12.340 [2024-12-06 22:11:45.082388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.340 [2024-12-06 22:11:45.082403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:12.340 [2024-12-06 22:11:45.082415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.340 [2024-12-06 22:11:45.082431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:12.340 [2024-12-06 22:11:45.082437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.340 [2024-12-06 22:11:45.082463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:12.340 [2024-12-06 22:11:45.082473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.340 [2024-12-06 22:11:45.082488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:12.340 [2024-12-06 22:11:45.082495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.340 [2024-12-06 22:11:45.082510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:12.340 [2024-12-06 22:11:45.082519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:12.340 [2024-12-06 22:11:45.082525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.340 [2024-12-06 22:11:45.082533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:12.340 [2024-12-06 22:11:45.082540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:12.340 [2024-12-06 22:11:45.082551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:12.340 [2024-12-06 22:11:45.082566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:12.340 [2024-12-06 22:11:45.082573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082581] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:12.340 [2024-12-06 22:11:45.082591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:12.340 [2024-12-06 22:11:45.082601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.340 [2024-12-06 22:11:45.082608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.340 [2024-12-06 22:11:45.082617] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:12.340 [2024-12-06 22:11:45.082624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:12.340 [2024-12-06 22:11:45.082636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:12.340 [2024-12-06 22:11:45.082643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:12.340 [2024-12-06 22:11:45.082653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:12.340 [2024-12-06 22:11:45.082660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:12.340 [2024-12-06 22:11:45.082670] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:12.340 [2024-12-06 22:11:45.082679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.340 [2024-12-06 22:11:45.082693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:12.340 [2024-12-06 22:11:45.082701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:12.340 [2024-12-06 22:11:45.082710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:12.340 [2024-12-06 22:11:45.082717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:12.340 [2024-12-06 22:11:45.082726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:12.340 [2024-12-06 22:11:45.082734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:12.340 [2024-12-06 22:11:45.082743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:12.340 [2024-12-06 22:11:45.082750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:12.340 [2024-12-06 22:11:45.082759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:12.340 [2024-12-06 22:11:45.082768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:12.340 [2024-12-06 22:11:45.082777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:12.340 [2024-12-06 22:11:45.082785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:12.340 [2024-12-06 22:11:45.082794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:12.340 [2024-12-06 22:11:45.082801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:12.340 [2024-12-06 22:11:45.082813] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:12.340 [2024-12-06 
22:11:45.082821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.340 [2024-12-06 22:11:45.082834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:12.340 [2024-12-06 22:11:45.082841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:12.340 [2024-12-06 22:11:45.082850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:12.340 [2024-12-06 22:11:45.082858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:12.340 [2024-12-06 22:11:45.082867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.340 [2024-12-06 22:11:45.082875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:12.340 [2024-12-06 22:11:45.082885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:21:12.340 [2024-12-06 22:11:45.082894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.340 [2024-12-06 22:11:45.115610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.340 [2024-12-06 22:11:45.115803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.340 [2024-12-06 22:11:45.115827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.655 ms 00:21:12.340 [2024-12-06 22:11:45.115840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.340 [2024-12-06 22:11:45.115978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.341 [2024-12-06 22:11:45.115989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:12.341 [2024-12-06 22:11:45.116000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:12.341 [2024-12-06 22:11:45.116008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.341 [2024-12-06 22:11:45.151077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.341 [2024-12-06 22:11:45.151272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.341 [2024-12-06 22:11:45.151296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.018 ms 00:21:12.341 [2024-12-06 22:11:45.151305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.341 [2024-12-06 22:11:45.151397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.341 [2024-12-06 22:11:45.151408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.341 [2024-12-06 22:11:45.151418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:12.341 [2024-12-06 22:11:45.151426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.341 [2024-12-06 22:11:45.151933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.341 [2024-12-06 22:11:45.151954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.341 [2024-12-06 22:11:45.151966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:21:12.341 [2024-12-06 22:11:45.151974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:12.341 [2024-12-06 22:11:45.152156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.341 [2024-12-06 22:11:45.152166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.341 [2024-12-06 22:11:45.152196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:21:12.341 [2024-12-06 22:11:45.152204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.341 [2024-12-06 22:11:45.169781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.341 [2024-12-06 22:11:45.169824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.341 [2024-12-06 22:11:45.169838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.550 ms 00:21:12.341 [2024-12-06 22:11:45.169847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.341 [2024-12-06 22:11:45.192375] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:12.341 [2024-12-06 22:11:45.192430] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:12.341 [2024-12-06 22:11:45.192448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.341 [2024-12-06 22:11:45.192458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:12.341 [2024-12-06 22:11:45.192471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.482 ms 00:21:12.341 [2024-12-06 22:11:45.192486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.218750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.218934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:12.601 [2024-12-06 22:11:45.218961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.164 ms 00:21:12.601 [2024-12-06 22:11:45.218970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.231592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.231637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:12.601 [2024-12-06 22:11:45.231655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.533 ms 00:21:12.601 [2024-12-06 22:11:45.231663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.243923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.243964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:12.601 [2024-12-06 22:11:45.243979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.172 ms 00:21:12.601 [2024-12-06 22:11:45.243987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.244667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.244692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:12.601 [2024-12-06 22:11:45.244704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:21:12.601 [2024-12-06 22:11:45.244712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 
22:11:45.309469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.309715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:12.601 [2024-12-06 22:11:45.309746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.726 ms 00:21:12.601 [2024-12-06 22:11:45.309756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.321452] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:12.601 [2024-12-06 22:11:45.340186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.340377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:12.601 [2024-12-06 22:11:45.340399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.320 ms 00:21:12.601 [2024-12-06 22:11:45.340411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.340505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.340519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:12.601 [2024-12-06 22:11:45.340528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:12.601 [2024-12-06 22:11:45.340538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.340595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.340607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:12.601 [2024-12-06 22:11:45.340615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:12.601 [2024-12-06 22:11:45.340628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.340654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.340665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:12.601 [2024-12-06 22:11:45.340673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:12.601 [2024-12-06 22:11:45.340685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.340721] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:12.601 [2024-12-06 22:11:45.340734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.340745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:12.601 [2024-12-06 22:11:45.340755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:12.601 [2024-12-06 22:11:45.340763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.366277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.366326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:12.601 [2024-12-06 22:11:45.366342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.484 ms 00:21:12.601 [2024-12-06 22:11:45.366350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.366482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.601 [2024-12-06 22:11:45.366494] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:12.601 [2024-12-06 22:11:45.366507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:12.601 [2024-12-06 22:11:45.366517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.601 [2024-12-06 22:11:45.367562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:12.601 [2024-12-06 22:11:45.371096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 318.424 ms, result 0 00:21:12.601 [2024-12-06 22:11:45.373439] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:12.601 Some configs were skipped because the RPC state that can call them passed over. 00:21:12.601 22:11:45 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:12.861 [2024-12-06 22:11:45.618250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.861 [2024-12-06 22:11:45.618454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:12.861 [2024-12-06 22:11:45.618525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:21:12.861 [2024-12-06 22:11:45.618554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.861 [2024-12-06 22:11:45.618612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.699 ms, result 0 00:21:12.861 true 00:21:12.861 22:11:45 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:13.122 [2024-12-06 22:11:45.824244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.122 [2024-12-06 22:11:45.824384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:13.122 [2024-12-06 22:11:45.824436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:21:13.122 [2024-12-06 22:11:45.824454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.122 [2024-12-06 22:11:45.824501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.374 ms, result 0 00:21:13.122 true 00:21:13.122 22:11:45 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76760 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76760 ']' 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76760 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76760 00:21:13.122 killing process with pid 76760 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76760' 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76760 00:21:13.122 22:11:45 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76760 00:21:13.694 [2024-12-06 22:11:46.414450] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.414494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:13.694 [2024-12-06 22:11:46.414505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:13.694 [2024-12-06 22:11:46.414513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.414532] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:13.694 [2024-12-06 22:11:46.416703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.416727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:13.694 [2024-12-06 22:11:46.416738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.158 ms 00:21:13.694 [2024-12-06 22:11:46.416744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.416982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.416990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:13.694 [2024-12-06 22:11:46.416998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:21:13.694 [2024-12-06 22:11:46.417004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.420428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.420554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:13.694 [2024-12-06 22:11:46.420572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:21:13.694 [2024-12-06 22:11:46.420578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.425990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.426014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:13.694 [2024-12-06 22:11:46.426025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.382 ms 00:21:13.694 [2024-12-06 22:11:46.426030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.433472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.433502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:13.694 [2024-12-06 22:11:46.433512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.386 ms 00:21:13.694 [2024-12-06 22:11:46.433517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.440181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.440207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:13.694 [2024-12-06 22:11:46.440216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.634 ms 00:21:13.694 [2024-12-06 22:11:46.440223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.440329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.440337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:13.694 [2024-12-06 22:11:46.440345] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:13.694 [2024-12-06 22:11:46.440351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.448014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.448119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:13.694 [2024-12-06 22:11:46.448133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.646 ms 00:21:13.694 [2024-12-06 22:11:46.448138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.455339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.455362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:13.694 [2024-12-06 22:11:46.455374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.173 ms 00:21:13.694 [2024-12-06 22:11:46.455379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.462534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.462619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:13.694 [2024-12-06 22:11:46.462633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.119 ms 00:21:13.694 [2024-12-06 22:11:46.462639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.469538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-12-06 22:11:46.469625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:13.694 [2024-12-06 22:11:46.469638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:21:13.694 [2024-12-06 22:11:46.469643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-12-06 22:11:46.469668] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:13.694 [2024-12-06 22:11:46.469678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469744] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:13.694 [2024-12-06 22:11:46.469852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 
[2024-12-06 22:11:46.469903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.469995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:21:13.695 [2024-12-06 22:11:46.470058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:13.695 [2024-12-06 22:11:46.470343] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:13.695 [2024-12-06 22:11:46.470353] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfad126-9ab1-4e1e-9416-c27cc38aa4c4 00:21:13.695 [2024-12-06 22:11:46.470361] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:13.695 [2024-12-06 22:11:46.470368] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:13.695 [2024-12-06 22:11:46.470373] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:13.695 [2024-12-06 22:11:46.470380] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:13.695 [2024-12-06 22:11:46.470386] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:13.695 [2024-12-06 22:11:46.470393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:13.695 [2024-12-06 22:11:46.470399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:13.695 [2024-12-06 22:11:46.470405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:13.695 [2024-12-06 22:11:46.470410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:13.695 [2024-12-06 22:11:46.470416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:13.695 [2024-12-06 22:11:46.470422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:13.695 [2024-12-06 22:11:46.470429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:21:13.695 [2024-12-06 22:11:46.470435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-12-06 22:11:46.479948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-12-06 22:11:46.479971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:13.696 [2024-12-06 22:11:46.479982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.495 ms 00:21:13.696 [2024-12-06 22:11:46.479988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.696 [2024-12-06 22:11:46.480289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.696 [2024-12-06 22:11:46.480369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:13.696 [2024-12-06 22:11:46.480383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:21:13.696 [2024-12-06 22:11:46.480389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.696 [2024-12-06 22:11:46.515232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.696 [2024-12-06 22:11:46.515320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:13.696 [2024-12-06 22:11:46.515334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.696 [2024-12-06 22:11:46.515340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.696 [2024-12-06 22:11:46.515414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.696 [2024-12-06 22:11:46.515421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:13.696 [2024-12-06 22:11:46.515430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.696 [2024-12-06 22:11:46.515436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.696 [2024-12-06 22:11:46.515469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.696 [2024-12-06 22:11:46.515476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:13.696 [2024-12-06 22:11:46.515485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.696 [2024-12-06 22:11:46.515490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.696 [2024-12-06 22:11:46.515505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.696 [2024-12-06 22:11:46.515511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:13.696 [2024-12-06 22:11:46.515518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.696 [2024-12-06 22:11:46.515525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-12-06 22:11:46.575573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.957 [2024-12-06 22:11:46.575603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:13.957 [2024-12-06 22:11:46.575612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.957 [2024-12-06 22:11:46.575618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-12-06 
22:11:46.623996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.957 [2024-12-06 22:11:46.624033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:13.957 [2024-12-06 22:11:46.624042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.957 [2024-12-06 22:11:46.624050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-12-06 22:11:46.624108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.957 [2024-12-06 22:11:46.624115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:13.957 [2024-12-06 22:11:46.624125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.957 [2024-12-06 22:11:46.624130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-12-06 22:11:46.624155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.957 [2024-12-06 22:11:46.624162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:13.957 [2024-12-06 22:11:46.624169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.957 [2024-12-06 22:11:46.624189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-12-06 22:11:46.624266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.957 [2024-12-06 22:11:46.624273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:13.957 [2024-12-06 22:11:46.624281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.957 [2024-12-06 22:11:46.624286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-12-06 22:11:46.624312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.957 [2024-12-06 22:11:46.624319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:13.957 [2024-12-06 22:11:46.624326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.957 [2024-12-06 22:11:46.624331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-12-06 22:11:46.624364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.957 [2024-12-06 22:11:46.624371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:13.957 [2024-12-06 22:11:46.624379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.957 [2024-12-06 22:11:46.624385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-12-06 22:11:46.624420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.957 [2024-12-06 22:11:46.624427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:13.957 [2024-12-06 22:11:46.624435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.957 [2024-12-06 22:11:46.624441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-12-06 22:11:46.624547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.080 ms, result 0 00:21:14.528 22:11:47 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:14.528 22:11:47 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:14.528 [2024-12-06 22:11:47.204061] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:21:14.528 [2024-12-06 22:11:47.204194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76807 ] 00:21:14.528 [2024-12-06 22:11:47.361641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.789 [2024-12-06 22:11:47.442714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.789 [2024-12-06 22:11:47.650561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:14.789 [2024-12-06 22:11:47.650612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:15.149 [2024-12-06 22:11:47.801981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.149 [2024-12-06 22:11:47.802014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:15.149 [2024-12-06 22:11:47.802024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:15.149 [2024-12-06 22:11:47.802030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.149 [2024-12-06 22:11:47.804098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.149 [2024-12-06 22:11:47.804226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:15.149 [2024-12-06 22:11:47.804240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:21:15.149 [2024-12-06 22:11:47.804246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.149 [2024-12-06 22:11:47.804298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:15.149 [2024-12-06 22:11:47.804809] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:15.149 [2024-12-06 22:11:47.804820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.149 [2024-12-06 22:11:47.804826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:15.149 [2024-12-06 22:11:47.804833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:21:15.149 [2024-12-06 22:11:47.804839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.149 [2024-12-06 22:11:47.805796] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:15.149 [2024-12-06 22:11:47.815444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.150 [2024-12-06 22:11:47.815546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:15.150 [2024-12-06 22:11:47.815560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.649 ms 00:21:15.150 [2024-12-06 22:11:47.815566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.150 [2024-12-06 22:11:47.815634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.150 [2024-12-06 22:11:47.815642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:15.150 [2024-12-06 22:11:47.815649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.017 ms 00:21:15.150 [2024-12-06 22:11:47.815654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.150 [2024-12-06 22:11:47.819899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.150 [2024-12-06 22:11:47.819921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:15.150 [2024-12-06 22:11:47.819928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:21:15.150 [2024-12-06 22:11:47.819934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.150 [2024-12-06 22:11:47.820000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.150 [2024-12-06 22:11:47.820007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:15.150 [2024-12-06 22:11:47.820013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:15.150 [2024-12-06 22:11:47.820027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.150 [2024-12-06 22:11:47.820046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.150 [2024-12-06 22:11:47.820053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:15.150 [2024-12-06 22:11:47.820059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:15.150 [2024-12-06 22:11:47.820065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.150 [2024-12-06 22:11:47.820082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:15.150 [2024-12-06 22:11:47.822669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.150 [2024-12-06 22:11:47.822764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:15.150 [2024-12-06 22:11:47.822776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:21:15.150 [2024-12-06 22:11:47.822781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.150 [2024-12-06 22:11:47.822816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.150 [2024-12-06 22:11:47.822823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:15.150 [2024-12-06 22:11:47.822829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:15.150 [2024-12-06 22:11:47.822840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.150 [2024-12-06 22:11:47.822855] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:15.150 [2024-12-06 22:11:47.822870] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:15.150 [2024-12-06 22:11:47.822904] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:15.150 [2024-12-06 22:11:47.822918] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:15.150 [2024-12-06 22:11:47.822999] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:15.150 [2024-12-06 22:11:47.823007] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:15.150 [2024-12-06 22:11:47.823014] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:15.150 [2024-12-06 22:11:47.823024] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:15.150 [2024-12-06 22:11:47.823031] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:15.150 [2024-12-06 22:11:47.823037] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:15.150 [2024-12-06 22:11:47.823042] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:15.150 [2024-12-06 22:11:47.823048] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:15.150 [2024-12-06 22:11:47.823053] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:15.150 [2024-12-06 22:11:47.823058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.150 [2024-12-06 22:11:47.823064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:15.150 [2024-12-06 22:11:47.823072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:21:15.150 [2024-12-06 22:11:47.823078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.150 [2024-12-06 22:11:47.823146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.150 [2024-12-06 22:11:47.823155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:15.150 [2024-12-06 22:11:47.823161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:15.150 [2024-12-06 22:11:47.823166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.150 [2024-12-06 22:11:47.823252] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:15.150 [2024-12-06 22:11:47.823259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:15.150 [2024-12-06 22:11:47.823266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:15.150 [2024-12-06 22:11:47.823272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.150 [2024-12-06 22:11:47.823278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:15.150 [2024-12-06 22:11:47.823283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:15.150 [2024-12-06 22:11:47.823288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:15.150 [2024-12-06 22:11:47.823294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:15.150 [2024-12-06 22:11:47.823300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:15.150 [2024-12-06 22:11:47.823305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:15.150 [2024-12-06 22:11:47.823310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:15.150 [2024-12-06 22:11:47.823320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:15.150 [2024-12-06 22:11:47.823325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:15.150 [2024-12-06 22:11:47.823330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:15.150 [2024-12-06 22:11:47.823335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:15.150 [2024-12-06 22:11:47.823340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.150 [2024-12-06 22:11:47.823344] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:15.150 [2024-12-06 22:11:47.823349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:15.150 [2024-12-06 22:11:47.823354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.150 [2024-12-06 22:11:47.823359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:15.150 [2024-12-06 22:11:47.823364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:15.150 [2024-12-06 22:11:47.823369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.150 [2024-12-06 22:11:47.823373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:15.150 [2024-12-06 22:11:47.823378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:15.151 [2024-12-06 22:11:47.823383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.151 [2024-12-06 22:11:47.823388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:15.151 [2024-12-06 22:11:47.823393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:15.151 [2024-12-06 22:11:47.823397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.151 [2024-12-06 22:11:47.823402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:15.151 [2024-12-06 22:11:47.823409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:15.151 [2024-12-06 22:11:47.823414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.151 [2024-12-06 22:11:47.823419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:15.151 [2024-12-06 22:11:47.823424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:15.151 [2024-12-06 22:11:47.823429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:15.151 [2024-12-06 22:11:47.823434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:15.151 [2024-12-06 22:11:47.823439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:15.151 [2024-12-06 22:11:47.823444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:15.151 [2024-12-06 22:11:47.823449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:15.151 [2024-12-06 22:11:47.823454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:15.151 [2024-12-06 22:11:47.823458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.151 [2024-12-06 22:11:47.823463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:15.151 [2024-12-06 22:11:47.823468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:15.151 [2024-12-06 22:11:47.823473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.151 [2024-12-06 22:11:47.823478] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:15.151 [2024-12-06 22:11:47.823484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:15.151 [2024-12-06 22:11:47.823490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:15.151 [2024-12-06 22:11:47.823496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.151 [2024-12-06 22:11:47.823501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:15.151 
[2024-12-06 22:11:47.823506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:15.151 [2024-12-06 22:11:47.823511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:15.151 [2024-12-06 22:11:47.823516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:15.151 [2024-12-06 22:11:47.823521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:15.151 [2024-12-06 22:11:47.823526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:15.151 [2024-12-06 22:11:47.823533] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:15.151 [2024-12-06 22:11:47.823539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:15.151 [2024-12-06 22:11:47.823546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:15.151 [2024-12-06 22:11:47.823551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:15.151 [2024-12-06 22:11:47.823556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:15.151 [2024-12-06 22:11:47.823562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:15.151 [2024-12-06 22:11:47.823567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:15.151 [2024-12-06 22:11:47.823572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:15.151 [2024-12-06 22:11:47.823578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:15.151 [2024-12-06 22:11:47.823583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:15.151 [2024-12-06 22:11:47.823588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:15.151 [2024-12-06 22:11:47.823594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:15.151 [2024-12-06 22:11:47.823599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:15.151 [2024-12-06 22:11:47.823604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:15.151 [2024-12-06 22:11:47.823609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:15.151 [2024-12-06 22:11:47.823615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:15.151 [2024-12-06 22:11:47.823620] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:15.151 [2024-12-06 22:11:47.823626] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:15.151 [2024-12-06 22:11:47.823632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:15.151 [2024-12-06 22:11:47.823638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:15.151 [2024-12-06 22:11:47.823643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:15.151 [2024-12-06 22:11:47.823648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:15.151 [2024-12-06 22:11:47.823654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.151 [2024-12-06 22:11:47.823662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:15.151 [2024-12-06 22:11:47.823667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:21:15.151 [2024-12-06 22:11:47.823672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.151 [2024-12-06 22:11:47.844348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.151 [2024-12-06 22:11:47.844373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:15.151 [2024-12-06 22:11:47.844381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.636 ms 00:21:15.151 [2024-12-06 22:11:47.844387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.151 [2024-12-06 22:11:47.844482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.151 [2024-12-06 22:11:47.844490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:15.151 [2024-12-06 22:11:47.844496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:15.151 [2024-12-06 22:11:47.844502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.151 [2024-12-06 22:11:47.884777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.151 [2024-12-06 22:11:47.884891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:15.151 [2024-12-06 22:11:47.884908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.259 ms 00:21:15.151 [2024-12-06 22:11:47.884915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.151 [2024-12-06 22:11:47.884974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.884983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:15.152 [2024-12-06 22:11:47.884990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:15.152 [2024-12-06 22:11:47.884995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:47.885288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.885306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:15.152 [2024-12-06 22:11:47.885313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:21:15.152 [2024-12-06 22:11:47.885323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 
22:11:47.885425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.885437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:15.152 [2024-12-06 22:11:47.885443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:15.152 [2024-12-06 22:11:47.885449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:47.896104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.896214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:15.152 [2024-12-06 22:11:47.896227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.639 ms 00:21:15.152 [2024-12-06 22:11:47.896233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:47.905933] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:15.152 [2024-12-06 22:11:47.905957] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:15.152 [2024-12-06 22:11:47.905966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.905973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:15.152 [2024-12-06 22:11:47.905979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.644 ms 00:21:15.152 [2024-12-06 22:11:47.905985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:47.924347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.924372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:15.152 [2024-12-06 22:11:47.924380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.317 ms 00:21:15.152 [2024-12-06 22:11:47.924387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:47.933236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.933259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:15.152 [2024-12-06 22:11:47.933266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.797 ms 00:21:15.152 [2024-12-06 22:11:47.933272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:47.941590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.941612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:15.152 [2024-12-06 22:11:47.941619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.279 ms 00:21:15.152 [2024-12-06 22:11:47.941624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:47.942086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.942100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:15.152 [2024-12-06 22:11:47.942107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:21:15.152 [2024-12-06 22:11:47.942112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:47.985839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:47.985877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:15.152 [2024-12-06 22:11:47.985886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.710 ms 00:21:15.152 [2024-12-06 22:11:47.985893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:47.993537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:15.152 [2024-12-06 22:11:48.004798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:48.004825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:15.152 [2024-12-06 22:11:48.004834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.840 ms 00:21:15.152 [2024-12-06 22:11:48.004844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:48.004912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:48.004920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:15.152 [2024-12-06 22:11:48.004927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:15.152 [2024-12-06 22:11:48.004933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:48.004968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:48.004974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:15.152 [2024-12-06 22:11:48.004981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:15.152 [2024-12-06 22:11:48.004989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:48.005012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:48.005019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:15.152 [2024-12-06 22:11:48.005025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:15.152 [2024-12-06 22:11:48.005030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.152 [2024-12-06 22:11:48.005054] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:15.152 [2024-12-06 22:11:48.005061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.152 [2024-12-06 22:11:48.005066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:15.152 [2024-12-06 22:11:48.005072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:15.152 [2024-12-06 22:11:48.005077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.414 [2024-12-06 22:11:48.022679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.414 [2024-12-06 22:11:48.022715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:15.414 [2024-12-06 22:11:48.022724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.586 ms 00:21:15.414 [2024-12-06 22:11:48.022731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.414 [2024-12-06 22:11:48.022796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.414 [2024-12-06 22:11:48.022805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:15.414 [2024-12-06 22:11:48.022812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:15.414 [2024-12-06 22:11:48.022818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.414 [2024-12-06 22:11:48.023456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:15.414 [2024-12-06 22:11:48.025776] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.255 ms, result 0 00:21:15.414 [2024-12-06 22:11:48.026435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:15.414 [2024-12-06 22:11:48.041364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:16.355  [2024-12-06T22:11:50.168Z] Copying: 18/256 [MB] (18 MBps) [2024-12-06T22:11:51.108Z] Copying: 28/256 [MB] (10 MBps) [2024-12-06T22:11:52.051Z] Copying: 53/256 [MB] (24 MBps) [2024-12-06T22:11:53.435Z] Copying: 70/256 [MB] (16 MBps) [2024-12-06T22:11:54.377Z] Copying: 81/256 [MB] (10 MBps) [2024-12-06T22:11:55.319Z] Copying: 96/256 [MB] (15 MBps) [2024-12-06T22:11:56.261Z] Copying: 112/256 [MB] (16 MBps) [2024-12-06T22:11:57.204Z] Copying: 127/256 [MB] (14 MBps) [2024-12-06T22:11:58.149Z] Copying: 145/256 [MB] (17 MBps) [2024-12-06T22:11:59.093Z] Copying: 163/256 [MB] (17 MBps) [2024-12-06T22:12:00.478Z] Copying: 185/256 [MB] (22 MBps) [2024-12-06T22:12:01.049Z] Copying: 206/256 [MB] (21 MBps) [2024-12-06T22:12:02.436Z] Copying: 220/256 [MB] (13 MBps) [2024-12-06T22:12:03.380Z] Copying: 230/256 [MB] (10 MBps) [2024-12-06T22:12:04.325Z] Copying: 241/256 [MB] (10 MBps) [2024-12-06T22:12:04.325Z] Copying: 254/256 [MB] (13 MBps) [2024-12-06T22:12:04.325Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-06 22:12:04.103396] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.453 [2024-12-06 22:12:04.113966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.114189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:31.453 [2024-12-06 22:12:04.114224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:31.453 [2024-12-06 22:12:04.114233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.114265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:31.453 [2024-12-06 22:12:04.117341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.117519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:31.453 [2024-12-06 22:12:04.117540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.061 ms 00:21:31.453 [2024-12-06 22:12:04.117549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.117824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.117836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:31.453 [2024-12-06 22:12:04.117845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:21:31.453 [2024-12-06 22:12:04.117855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 
22:12:04.121582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.121609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:31.453 [2024-12-06 22:12:04.121620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms 00:21:31.453 [2024-12-06 22:12:04.121628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.128607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.128781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:31.453 [2024-12-06 22:12:04.128800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.960 ms 00:21:31.453 [2024-12-06 22:12:04.128809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.154758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.154805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:31.453 [2024-12-06 22:12:04.154818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.878 ms 00:21:31.453 [2024-12-06 22:12:04.154826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.171850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.171898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:31.453 [2024-12-06 22:12:04.171917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.974 ms 00:21:31.453 [2024-12-06 22:12:04.171925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.172095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.172109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:31.453 [2024-12-06 22:12:04.172129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:31.453 [2024-12-06 22:12:04.172138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.198689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.198892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:31.453 [2024-12-06 22:12:04.198914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.533 ms 00:21:31.453 [2024-12-06 22:12:04.198922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.224428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.224477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:31.453 [2024-12-06 22:12:04.224490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.412 ms 00:21:31.453 [2024-12-06 22:12:04.224498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.249151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.249205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:31.453 [2024-12-06 22:12:04.249216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.603 ms 00:21:31.453 [2024-12-06 22:12:04.249224] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.274216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.453 [2024-12-06 22:12:04.274260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:31.453 [2024-12-06 22:12:04.274272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.911 ms 00:21:31.453 [2024-12-06 22:12:04.274279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.453 [2024-12-06 22:12:04.274327] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:31.453 [2024-12-06 22:12:04.274344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:31.453 [2024-12-06 22:12:04.274355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:31.453 [2024-12-06 22:12:04.274364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:31.453 [2024-12-06 22:12:04.274371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274509] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274703] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 
22:12:04.274895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.274995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.275003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.275011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.275018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.275026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.275034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:31.454 [2024-12-06 22:12:04.275042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:21:31.455 [2024-12-06 22:12:04.275101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:31.455 [2024-12-06 22:12:04.275148] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:31.455 [2024-12-06 22:12:04.275156] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfad126-9ab1-4e1e-9416-c27cc38aa4c4 00:21:31.455 [2024-12-06 22:12:04.275165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:31.455 [2024-12-06 22:12:04.275196] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:31.455 [2024-12-06 22:12:04.275204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:31.455 [2024-12-06 22:12:04.275214] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:31.455 [2024-12-06 22:12:04.275222] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:31.455 [2024-12-06 22:12:04.275230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:31.455 [2024-12-06 22:12:04.275242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:31.455 [2024-12-06 22:12:04.275250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:31.455 [2024-12-06 22:12:04.275256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:31.455 [2024-12-06 22:12:04.275263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.455 [2024-12-06 22:12:04.275271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:31.455 [2024-12-06 22:12:04.275281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:21:31.455 [2024-12-06 22:12:04.275289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.455 [2024-12-06 22:12:04.288993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.455 [2024-12-06 22:12:04.289195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:31.455 [2024-12-06 22:12:04.289213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.670 ms 00:21:31.455 [2024-12-06 22:12:04.289222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.455 [2024-12-06 22:12:04.289632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.455 [2024-12-06 22:12:04.289653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:31.455 [2024-12-06 22:12:04.289664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:21:31.455 [2024-12-06 22:12:04.289672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.328485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.328660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:31.717 [2024-12-06 22:12:04.328680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.328696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.328801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.328813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.717 [2024-12-06 22:12:04.328822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.328830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.328890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.328900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.717 [2024-12-06 22:12:04.328909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.328917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.328939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.328947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.717 [2024-12-06 22:12:04.328955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.328963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.414403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.414463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.717 [2024-12-06 22:12:04.414479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.414487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.484492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.484548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.717 [2024-12-06 22:12:04.484562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.484571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.484657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.484667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.717 [2024-12-06 22:12:04.484677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.484686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.484719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.484736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.717 [2024-12-06 22:12:04.484746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.484755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.484856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.484868] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.717 [2024-12-06 22:12:04.484876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.484885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.484920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.484932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:31.717 [2024-12-06 22:12:04.484944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.484952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.484998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.485009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:31.717 [2024-12-06 22:12:04.485018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.485027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.485075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.717 [2024-12-06 22:12:04.485090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.717 [2024-12-06 22:12:04.485101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.717 [2024-12-06 22:12:04.485112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.717 [2024-12-06 22:12:04.485300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.350 ms, result 0 00:21:32.659 00:21:32.659 00:21:32.659 22:12:05 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:21:32.659 22:12:05 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:33.229 22:12:05 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:33.229 [2024-12-06 22:12:05.898677] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:21:33.229 [2024-12-06 22:12:05.898813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77006 ] 00:21:33.230 [2024-12-06 22:12:06.054873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.490 [2024-12-06 22:12:06.184570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.751 [2024-12-06 22:12:06.479760] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:33.751 [2024-12-06 22:12:06.479845] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.013 [2024-12-06 22:12:06.642295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.013 [2024-12-06 22:12:06.642547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:34.013 [2024-12-06 22:12:06.642574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:34.013 [2024-12-06 22:12:06.642584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.013 [2024-12-06 22:12:06.646154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.013 [2024-12-06 22:12:06.646243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.013 [2024-12-06 22:12:06.646258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.541 ms 00:21:34.013 [2024-12-06 22:12:06.646267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.013 [2024-12-06 22:12:06.646425] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:34.013 [2024-12-06 22:12:06.647150] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:34.013 [2024-12-06 22:12:06.647199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.013 [2024-12-06 22:12:06.647209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.013 [2024-12-06 22:12:06.647220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:21:34.013 [2024-12-06 22:12:06.647228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.013 [2024-12-06 22:12:06.649316] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:34.013 [2024-12-06 22:12:06.663763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.013 [2024-12-06 22:12:06.663811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:34.013 [2024-12-06 22:12:06.663826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.449 ms 00:21:34.013 [2024-12-06 22:12:06.663834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.013 [2024-12-06 22:12:06.663959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.013 [2024-12-06 22:12:06.663973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:34.013 [2024-12-06 22:12:06.663984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:34.013 [2024-12-06 22:12:06.663993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.013 [2024-12-06 22:12:06.672393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:34.013 [2024-12-06 22:12:06.672435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.013 [2024-12-06 22:12:06.672447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.322 ms 00:21:34.013 [2024-12-06 22:12:06.672455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.014 [2024-12-06 22:12:06.672568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.014 [2024-12-06 22:12:06.672579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.014 [2024-12-06 22:12:06.672589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:34.014 [2024-12-06 22:12:06.672598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.014 [2024-12-06 22:12:06.672632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.014 [2024-12-06 22:12:06.672642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:34.014 [2024-12-06 22:12:06.672650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:34.014 [2024-12-06 22:12:06.672658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.014 [2024-12-06 22:12:06.672681] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:34.014 [2024-12-06 22:12:06.676690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.014 [2024-12-06 22:12:06.676730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.014 [2024-12-06 22:12:06.676742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.016 ms 00:21:34.014 [2024-12-06 22:12:06.676750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.014 [2024-12-06 22:12:06.676836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.014 [2024-12-06 22:12:06.676846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:34.014 [2024-12-06 22:12:06.676856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:34.014 [2024-12-06 22:12:06.676865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.014 [2024-12-06 22:12:06.676891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:34.014 [2024-12-06 22:12:06.676916] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:34.014 [2024-12-06 22:12:06.676954] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:34.014 [2024-12-06 22:12:06.676971] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:34.014 [2024-12-06 22:12:06.677078] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:34.014 [2024-12-06 22:12:06.677092] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:34.014 [2024-12-06 22:12:06.677103] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:34.014 [2024-12-06 22:12:06.677119] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:34.014 [2024-12-06 22:12:06.677129] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:34.014 [2024-12-06 22:12:06.677138] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:34.014 [2024-12-06 22:12:06.677146] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:34.014 [2024-12-06 22:12:06.677154] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:34.014 [2024-12-06 22:12:06.677163] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:34.014 [2024-12-06 22:12:06.677197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.014 [2024-12-06 22:12:06.677207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:34.014 [2024-12-06 22:12:06.677217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:21:34.014 [2024-12-06 22:12:06.677224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.014 [2024-12-06 22:12:06.677314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.014 [2024-12-06 22:12:06.677330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:34.014 [2024-12-06 22:12:06.677338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:34.014 [2024-12-06 22:12:06.677346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.014 [2024-12-06 22:12:06.677450] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:34.014 [2024-12-06 22:12:06.677464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:34.014 [2024-12-06 22:12:06.677474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.014 [2024-12-06 22:12:06.677482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:34.014 [2024-12-06 22:12:06.677499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:34.014 [2024-12-06 22:12:06.677515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:34.014 [2024-12-06 22:12:06.677533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.014 [2024-12-06 22:12:06.677548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:34.014 [2024-12-06 22:12:06.677562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:34.014 [2024-12-06 22:12:06.677569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.014 [2024-12-06 22:12:06.677579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:34.014 [2024-12-06 22:12:06.677588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:34.014 [2024-12-06 22:12:06.677596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:34.014 [2024-12-06 22:12:06.677612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:34.014 [2024-12-06 22:12:06.677619] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:34.014 [2024-12-06 22:12:06.677634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.014 [2024-12-06 22:12:06.677649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:34.014 [2024-12-06 22:12:06.677656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.014 [2024-12-06 22:12:06.677669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:34.014 [2024-12-06 22:12:06.677676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.014 [2024-12-06 22:12:06.677689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:34.014 [2024-12-06 22:12:06.677697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.014 [2024-12-06 22:12:06.677710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:34.014 [2024-12-06 22:12:06.677717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.014 [2024-12-06 22:12:06.677729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:34.014 [2024-12-06 22:12:06.677736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:34.014 [2024-12-06 22:12:06.677742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.014 [2024-12-06 22:12:06.677750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:34.014 [2024-12-06 22:12:06.677757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:34.014 [2024-12-06 22:12:06.677763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:34.014 [2024-12-06 22:12:06.677776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:34.014 [2024-12-06 22:12:06.677782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.014 [2024-12-06 22:12:06.677788] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:34.015 [2024-12-06 22:12:06.677796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:34.015 [2024-12-06 22:12:06.677805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.015 [2024-12-06 22:12:06.677812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.015 [2024-12-06 22:12:06.677822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:34.015 [2024-12-06 22:12:06.677829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:34.015 [2024-12-06 22:12:06.677836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:34.015 
[2024-12-06 22:12:06.677843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:34.015 [2024-12-06 22:12:06.677850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:34.015 [2024-12-06 22:12:06.677857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:34.015 [2024-12-06 22:12:06.677866] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:34.015 [2024-12-06 22:12:06.677876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.015 [2024-12-06 22:12:06.677884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:34.015 [2024-12-06 22:12:06.677892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:34.015 [2024-12-06 22:12:06.677899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:34.015 [2024-12-06 22:12:06.677906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:34.015 [2024-12-06 22:12:06.677913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:34.015 [2024-12-06 22:12:06.677921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:34.015 [2024-12-06 22:12:06.677928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:34.015 [2024-12-06 22:12:06.677935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:34.015 [2024-12-06 22:12:06.677942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:34.015 [2024-12-06 22:12:06.677949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:34.015 [2024-12-06 22:12:06.677956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:34.015 [2024-12-06 22:12:06.677963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:34.015 [2024-12-06 22:12:06.677969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:34.015 [2024-12-06 22:12:06.677977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:34.015 [2024-12-06 22:12:06.677984] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:34.015 [2024-12-06 22:12:06.677992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.015 [2024-12-06 22:12:06.678000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:34.015 [2024-12-06 22:12:06.678007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:34.015 [2024-12-06 22:12:06.678014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:34.015 [2024-12-06 22:12:06.678022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:34.015 [2024-12-06 22:12:06.678029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.678039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:34.015 [2024-12-06 22:12:06.678047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:21:34.015 [2024-12-06 22:12:06.678053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.710331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.710378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.015 [2024-12-06 22:12:06.710390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.196 ms 00:21:34.015 [2024-12-06 22:12:06.710399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.710542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.710553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:34.015 [2024-12-06 22:12:06.710563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:34.015 [2024-12-06 22:12:06.710572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.761500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.761555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.015 [2024-12-06 22:12:06.761573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.902 ms 00:21:34.015 [2024-12-06 22:12:06.761582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.761698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.761711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.015 [2024-12-06 22:12:06.761722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:34.015 [2024-12-06 22:12:06.761730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.762279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.762303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.015 [2024-12-06 22:12:06.762326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:21:34.015 [2024-12-06 22:12:06.762335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.762493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.762505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.015 [2024-12-06 22:12:06.762514] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:34.015 [2024-12-06 22:12:06.762523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.778793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.778836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.015 [2024-12-06 22:12:06.778847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.248 ms 00:21:34.015 [2024-12-06 22:12:06.778855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.793286] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:34.015 [2024-12-06 22:12:06.793332] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:34.015 [2024-12-06 22:12:06.793345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.793354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:34.015 [2024-12-06 22:12:06.793365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.378 ms 00:21:34.015 [2024-12-06 22:12:06.793373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.819560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.819610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:34.015 [2024-12-06 22:12:06.819632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.087 ms 00:21:34.015 [2024-12-06 22:12:06.819641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.832856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.832905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:34.015 [2024-12-06 22:12:06.832917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.115 ms 00:21:34.015 [2024-12-06 22:12:06.832925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.015 [2024-12-06 22:12:06.845670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.015 [2024-12-06 22:12:06.845726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:34.016 [2024-12-06 22:12:06.845738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.654 ms 00:21:34.016 [2024-12-06 22:12:06.845746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-12-06 22:12:06.846461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-12-06 22:12:06.846491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:34.016 [2024-12-06 22:12:06.846505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:21:34.016 [2024-12-06 22:12:06.846513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.276 [2024-12-06 22:12:06.915560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.276 [2024-12-06 22:12:06.915614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:34.276 [2024-12-06 22:12:06.915629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.016 ms 00:21:34.276 [2024-12-06 22:12:06.915638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.276 [2024-12-06 22:12:06.926973] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:34.276 [2024-12-06 22:12:06.946263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.276 [2024-12-06 22:12:06.946312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:34.276 [2024-12-06 22:12:06.946325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.522 ms 00:21:34.276 [2024-12-06 22:12:06.946340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.276 [2024-12-06 22:12:06.946432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.276 [2024-12-06 22:12:06.946444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:34.276 [2024-12-06 22:12:06.946454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:34.276 [2024-12-06 22:12:06.946463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.276 [2024-12-06 22:12:06.946523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.276 [2024-12-06 22:12:06.946536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:34.276 [2024-12-06 22:12:06.946545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:34.276 [2024-12-06 22:12:06.946557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.276 [2024-12-06 22:12:06.946588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.276 [2024-12-06 22:12:06.946598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:34.276 [2024-12-06 22:12:06.946606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:34.276 [2024-12-06 22:12:06.946614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.276 [2024-12-06 22:12:06.946656] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:34.276 [2024-12-06 22:12:06.946668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.276 [2024-12-06 22:12:06.946677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:34.276 [2024-12-06 22:12:06.946685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:34.276 [2024-12-06 22:12:06.946693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.276 [2024-12-06 22:12:06.972722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.276 [2024-12-06 22:12:06.972774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:34.276 [2024-12-06 22:12:06.972788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.006 ms 00:21:34.276 [2024-12-06 22:12:06.972797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.276 [2024-12-06 22:12:06.972915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.276 [2024-12-06 22:12:06.972928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:34.276 [2024-12-06 22:12:06.972940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:34.276 [2024-12-06 22:12:06.972950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
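Each trace_step record above pairs an Action with a name, a per-step duration, and a status, and the "FTL startup" finish message that follows reports the summed total (331.445 ms). A minimal sketch of this style of per-step timing is below, assuming a monotonic clock; the names step_timer, step_begin, and step_end are hypothetical stand-ins for illustration, not the actual mngt/ftl_mngt.c API.

/*
 * Minimal sketch of per-step duration tracing in the shape of the
 * trace_step records above. All names here are hypothetical
 * illustrations, not SPDK's mngt/ftl_mngt.c interface.
 */
#include <stdio.h>
#include <time.h>

struct step_timer {
    struct timespec start;
};

static void step_begin(struct step_timer *t)
{
    /* Monotonic stamp, so wall-clock adjustments can't skew durations. */
    clock_gettime(CLOCK_MONOTONIC, &t->start);
}

/* Emit one Action/name/duration/status record and return the duration. */
static double step_end(struct step_timer *t, const char *name, int status)
{
    struct timespec end;
    clock_gettime(CLOCK_MONOTONIC, &end);
    double ms = (end.tv_sec - t->start.tv_sec) * 1e3 +
                (end.tv_nsec - t->start.tv_nsec) / 1e6;
    printf("Action\n name: %s\n duration: %.3f ms\n status: %d\n",
           name, ms, status);
    return ms;
}

int main(void)
{
    struct step_timer t;
    double total = 0.0;

    step_begin(&t);
    /* ... perform one management step, e.g. initialize metadata ... */
    total += step_end(&t, "Initialize metadata", 0);

    /* A finish_msg-style summary line reports the accumulated total. */
    printf("Management process finished, duration = %.3f ms, result 0\n",
           total);
    return 0;
}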
00:21:34.277 [2024-12-06 22:12:06.974061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:34.277 [2024-12-06 22:12:06.977552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.445 ms, result 0 00:21:34.277 [2024-12-06 22:12:06.978816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:34.277 [2024-12-06 22:12:06.992356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:34.538  [2024-12-06T22:12:07.410Z] Copying: 4096/4096 [kB] (average 13 MBps)[2024-12-06 22:12:07.295888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:34.538 [2024-12-06 22:12:07.304925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.538 [2024-12-06 22:12:07.304976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:34.538 [2024-12-06 22:12:07.304998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:34.538 [2024-12-06 22:12:07.305007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.538 [2024-12-06 22:12:07.305032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:34.538 [2024-12-06 22:12:07.307967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.538 [2024-12-06 22:12:07.308005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:34.538 [2024-12-06 22:12:07.308017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.921 ms 00:21:34.538 [2024-12-06 22:12:07.308052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.538 [2024-12-06 22:12:07.311592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.538 [2024-12-06 22:12:07.311635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:34.538 [2024-12-06 22:12:07.311646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.510 ms 00:21:34.538 [2024-12-06 22:12:07.311654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.538 [2024-12-06 22:12:07.315977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.538 [2024-12-06 22:12:07.316013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:34.538 [2024-12-06 22:12:07.316024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:21:34.538 [2024-12-06 22:12:07.316045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.538 [2024-12-06 22:12:07.323039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.539 [2024-12-06 22:12:07.323241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:34.539 [2024-12-06 22:12:07.323261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:21:34.539 [2024-12-06 22:12:07.323271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.539 [2024-12-06 22:12:07.348295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.539 [2024-12-06 22:12:07.348341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:34.539 [2024-12-06 22:12:07.348353] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.971 ms 00:21:34.539 [2024-12-06 22:12:07.348360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.539 [2024-12-06 22:12:07.365009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.539 [2024-12-06 22:12:07.365075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:34.539 [2024-12-06 22:12:07.365088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.601 ms 00:21:34.539 [2024-12-06 22:12:07.365103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.539 [2024-12-06 22:12:07.365316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.539 [2024-12-06 22:12:07.365333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:34.539 [2024-12-06 22:12:07.365351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:34.539 [2024-12-06 22:12:07.365359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.539 [2024-12-06 22:12:07.391557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.539 [2024-12-06 22:12:07.391600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:34.539 [2024-12-06 22:12:07.391611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.180 ms 00:21:34.539 [2024-12-06 22:12:07.391618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.801 [2024-12-06 22:12:07.416736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.801 [2024-12-06 22:12:07.416779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:34.801 [2024-12-06 22:12:07.416790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.059 ms 00:21:34.801 [2024-12-06 22:12:07.416797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.801 [2024-12-06 22:12:07.441518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.801 [2024-12-06 22:12:07.441563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:34.801 [2024-12-06 22:12:07.441575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.674 ms 00:21:34.801 [2024-12-06 22:12:07.441582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.801 [2024-12-06 22:12:07.466067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.801 [2024-12-06 22:12:07.466110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:34.801 [2024-12-06 22:12:07.466122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.409 ms 00:21:34.801 [2024-12-06 22:12:07.466130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.801 [2024-12-06 22:12:07.466212] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:34.801 [2024-12-06 22:12:07.466229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:34.801 [2024-12-06 22:12:07.466265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:34.801 [2024-12-06 22:12:07.466658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466884] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.466984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.467001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.467009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.467017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.467025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.467032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.467041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.467049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:34.802 [2024-12-06 22:12:07.467064] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:34.802 [2024-12-06 22:12:07.467072] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfad126-9ab1-4e1e-9416-c27cc38aa4c4 00:21:34.802 [2024-12-06 22:12:07.467080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:34.802 [2024-12-06 22:12:07.467087] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:34.802 [2024-12-06 22:12:07.467096] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:34.802 [2024-12-06 22:12:07.467104] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:34.802 [2024-12-06 22:12:07.467111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:34.802 [2024-12-06 22:12:07.467119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:34.802 [2024-12-06 22:12:07.467130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:34.802 [2024-12-06 22:12:07.467137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:34.802 [2024-12-06 22:12:07.467143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:34.802 [2024-12-06 22:12:07.467150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.802 [2024-12-06 22:12:07.467158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:34.802 [2024-12-06 22:12:07.467168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:21:34.802 [2024-12-06 22:12:07.467186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.802 [2024-12-06 22:12:07.480307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.802 [2024-12-06 22:12:07.480345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:34.802 [2024-12-06 22:12:07.480356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.086 ms 00:21:34.802 [2024-12-06 22:12:07.480364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.802 [2024-12-06 22:12:07.480761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.802 [2024-12-06 22:12:07.480773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:34.802 [2024-12-06 22:12:07.480782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:21:34.802 [2024-12-06 22:12:07.480790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.802 [2024-12-06 22:12:07.519364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.802 [2024-12-06 22:12:07.519407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.802 [2024-12-06 22:12:07.519418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.802 [2024-12-06 22:12:07.519433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.802 [2024-12-06 22:12:07.519508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.802 [2024-12-06 22:12:07.519518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.802 [2024-12-06 22:12:07.519526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.802 [2024-12-06 22:12:07.519534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-12-06 22:12:07.519584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.803 [2024-12-06 22:12:07.519595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.803 [2024-12-06 22:12:07.519603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.803 [2024-12-06 22:12:07.519612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-12-06 22:12:07.519635] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.803 [2024-12-06 22:12:07.519643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.803 [2024-12-06 22:12:07.519651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.803 [2024-12-06 22:12:07.519659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-12-06 22:12:07.601811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.803 [2024-12-06 22:12:07.602088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.803 [2024-12-06 22:12:07.602110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.803 [2024-12-06 22:12:07.602125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.064 [2024-12-06 22:12:07.671387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.064 [2024-12-06 22:12:07.671438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.064 [2024-12-06 22:12:07.671451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.064 [2024-12-06 22:12:07.671461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.064 [2024-12-06 22:12:07.671522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.064 [2024-12-06 22:12:07.671533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.064 [2024-12-06 22:12:07.671542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.064 [2024-12-06 22:12:07.671551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.064 [2024-12-06 22:12:07.671584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.064 [2024-12-06 22:12:07.671600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.064 [2024-12-06 22:12:07.671608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.064 [2024-12-06 22:12:07.671618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.064 [2024-12-06 22:12:07.671716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.064 [2024-12-06 22:12:07.671729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.064 [2024-12-06 22:12:07.671737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.064 [2024-12-06 22:12:07.671746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.064 [2024-12-06 22:12:07.671787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.064 [2024-12-06 22:12:07.671799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:35.064 [2024-12-06 22:12:07.671811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.064 [2024-12-06 22:12:07.671821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.064 [2024-12-06 22:12:07.671867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.064 [2024-12-06 22:12:07.671879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.064 [2024-12-06 22:12:07.671887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.064 [2024-12-06 22:12:07.671896] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:35.064 [2024-12-06 22:12:07.671946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.064 [2024-12-06 22:12:07.671961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.064 [2024-12-06 22:12:07.671971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.064 [2024-12-06 22:12:07.671980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.064 [2024-12-06 22:12:07.672151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.211 ms, result 0 00:21:35.635 00:21:35.635 00:21:35.635 22:12:08 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77035 00:21:35.635 22:12:08 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77035 00:21:35.635 22:12:08 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:35.635 22:12:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77035 ']' 00:21:35.635 22:12:08 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.635 22:12:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.635 22:12:08 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.635 22:12:08 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.635 22:12:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:35.897 [2024-12-06 22:12:08.533991] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
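The waitforlisten 77035 step above blocks until the freshly launched spdk_tgt is accepting RPCs on the UNIX domain socket /var/tmp/spdk.sock. A generic sketch of that polling idea follows, assuming a plain connect() probe with a fixed retry budget; wait_for_listen, the 100-attempt limit, and the 100 ms interval are hypothetical choices for illustration, not the real autotest_common.sh implementation, which also checks that the target process is still alive.

/*
 * Illustrative sketch of the idea behind waitforlisten: poll a UNIX
 * domain socket (here /var/tmp/spdk.sock) until the target accepts a
 * connection or the retry budget runs out. Hypothetical stand-in for
 * the shell helper in autotest_common.sh.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;           /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);     /* 100 ms between attempts (assumed) */
    }
    return -1;                  /* gave up; caller should fail the test */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) ? 1 : 0;
}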
00:21:35.897 [2024-12-06 22:12:08.534454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77035 ] 00:21:35.897 [2024-12-06 22:12:08.699311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.159 [2024-12-06 22:12:08.818677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.731 22:12:09 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.731 22:12:09 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:36.731 22:12:09 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:36.991 [2024-12-06 22:12:09.721074] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:36.991 [2024-12-06 22:12:09.721153] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.252 [2024-12-06 22:12:09.900261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.900317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:37.252 [2024-12-06 22:12:09.900335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:37.252 [2024-12-06 22:12:09.900344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.903352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.903598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.252 [2024-12-06 22:12:09.903624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:21:37.252 [2024-12-06 22:12:09.903633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.903903] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:37.252 [2024-12-06 22:12:09.905013] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:37.252 [2024-12-06 22:12:09.905066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.905077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.252 [2024-12-06 22:12:09.905089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:21:37.252 [2024-12-06 22:12:09.905098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.907217] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:37.252 [2024-12-06 22:12:09.921458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.921516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:37.252 [2024-12-06 22:12:09.921532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.249 ms 00:21:37.252 [2024-12-06 22:12:09.921543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.921655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.921672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:37.252 [2024-12-06 22:12:09.921681] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:37.252 [2024-12-06 22:12:09.921692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.929493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.929542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.252 [2024-12-06 22:12:09.929553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.747 ms 00:21:37.252 [2024-12-06 22:12:09.929564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.929681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.929694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.252 [2024-12-06 22:12:09.929704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:37.252 [2024-12-06 22:12:09.929718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.929745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.929761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:37.252 [2024-12-06 22:12:09.929769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:37.252 [2024-12-06 22:12:09.929778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.929804] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:37.252 [2024-12-06 22:12:09.933905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.934090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.252 [2024-12-06 22:12:09.934114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.107 ms 00:21:37.252 [2024-12-06 22:12:09.934123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.934231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.934243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:37.252 [2024-12-06 22:12:09.934254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:37.252 [2024-12-06 22:12:09.934265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.252 [2024-12-06 22:12:09.934289] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:37.252 [2024-12-06 22:12:09.934312] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:37.252 [2024-12-06 22:12:09.934358] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:37.252 [2024-12-06 22:12:09.934374] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:37.252 [2024-12-06 22:12:09.934482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:37.252 [2024-12-06 22:12:09.934494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:37.252 [2024-12-06 22:12:09.934510] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:37.252 [2024-12-06 22:12:09.934521] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:37.252 [2024-12-06 22:12:09.934532] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:37.252 [2024-12-06 22:12:09.934541] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:37.252 [2024-12-06 22:12:09.934550] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:37.252 [2024-12-06 22:12:09.934558] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:37.252 [2024-12-06 22:12:09.934569] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:37.252 [2024-12-06 22:12:09.934577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.252 [2024-12-06 22:12:09.934587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:37.252 [2024-12-06 22:12:09.934595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:21:37.253 [2024-12-06 22:12:09.934605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.253 [2024-12-06 22:12:09.934693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.253 [2024-12-06 22:12:09.934704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:37.253 [2024-12-06 22:12:09.934712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:37.253 [2024-12-06 22:12:09.934722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.253 [2024-12-06 22:12:09.934824] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:37.253 [2024-12-06 22:12:09.934838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:37.253 [2024-12-06 22:12:09.934847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.253 [2024-12-06 22:12:09.934856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.253 [2024-12-06 22:12:09.934864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:37.253 [2024-12-06 22:12:09.934875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:37.253 [2024-12-06 22:12:09.934884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:37.253 [2024-12-06 22:12:09.934895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:37.253 [2024-12-06 22:12:09.934903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:37.253 [2024-12-06 22:12:09.934913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.253 [2024-12-06 22:12:09.934921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:37.253 [2024-12-06 22:12:09.934930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:37.253 [2024-12-06 22:12:09.934939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.253 [2024-12-06 22:12:09.934948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:37.253 [2024-12-06 22:12:09.934955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:37.253 [2024-12-06 22:12:09.934964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.253 
[2024-12-06 22:12:09.934970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:37.253 [2024-12-06 22:12:09.934979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:37.253 [2024-12-06 22:12:09.934992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.253 [2024-12-06 22:12:09.935002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:37.253 [2024-12-06 22:12:09.935009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:37.253 [2024-12-06 22:12:09.935017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.253 [2024-12-06 22:12:09.935024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:37.253 [2024-12-06 22:12:09.935034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:37.253 [2024-12-06 22:12:09.935042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.253 [2024-12-06 22:12:09.935051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:37.253 [2024-12-06 22:12:09.935057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:37.253 [2024-12-06 22:12:09.935066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.253 [2024-12-06 22:12:09.935072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:37.253 [2024-12-06 22:12:09.935082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:37.253 [2024-12-06 22:12:09.935090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.253 [2024-12-06 22:12:09.935100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:37.253 [2024-12-06 22:12:09.935107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:37.253 [2024-12-06 22:12:09.935115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.253 [2024-12-06 22:12:09.935121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:37.253 [2024-12-06 22:12:09.935130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:37.253 [2024-12-06 22:12:09.935138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.253 [2024-12-06 22:12:09.935146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:37.253 [2024-12-06 22:12:09.935153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:37.253 [2024-12-06 22:12:09.935163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.253 [2024-12-06 22:12:09.935185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:37.253 [2024-12-06 22:12:09.935195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:37.253 [2024-12-06 22:12:09.935205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.253 [2024-12-06 22:12:09.935216] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:37.253 [2024-12-06 22:12:09.935228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:37.253 [2024-12-06 22:12:09.935238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.253 [2024-12-06 22:12:09.935246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.253 [2024-12-06 22:12:09.935256] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:37.253 [2024-12-06 22:12:09.935263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:37.253 [2024-12-06 22:12:09.935273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:37.253 [2024-12-06 22:12:09.935281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:37.253 [2024-12-06 22:12:09.935290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:37.253 [2024-12-06 22:12:09.935297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:37.253 [2024-12-06 22:12:09.935308] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:37.253 [2024-12-06 22:12:09.935317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.253 [2024-12-06 22:12:09.935339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:37.253 [2024-12-06 22:12:09.935348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:37.253 [2024-12-06 22:12:09.935357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:37.253 [2024-12-06 22:12:09.935364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:37.253 [2024-12-06 22:12:09.935373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:37.253 [2024-12-06 22:12:09.935380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:37.253 [2024-12-06 22:12:09.935389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:37.253 [2024-12-06 22:12:09.935398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:37.253 [2024-12-06 22:12:09.935407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:37.253 [2024-12-06 22:12:09.935414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:37.253 [2024-12-06 22:12:09.935423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:37.253 [2024-12-06 22:12:09.935431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:37.253 [2024-12-06 22:12:09.935442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:37.253 [2024-12-06 22:12:09.935451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:37.253 [2024-12-06 22:12:09.935460] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:37.253 [2024-12-06 
22:12:09.935468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.253 [2024-12-06 22:12:09.935480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:37.253 [2024-12-06 22:12:09.935487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:37.253 [2024-12-06 22:12:09.935498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:37.253 [2024-12-06 22:12:09.935506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:37.253 [2024-12-06 22:12:09.935516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.253 [2024-12-06 22:12:09.935524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:37.253 [2024-12-06 22:12:09.935534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:21:37.253 [2024-12-06 22:12:09.935545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.253 [2024-12-06 22:12:09.967519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.253 [2024-12-06 22:12:09.967689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.253 [2024-12-06 22:12:09.967755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.910 ms 00:21:37.253 [2024-12-06 22:12:09.967783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.253 [2024-12-06 22:12:09.967930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.253 [2024-12-06 22:12:09.968021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:37.253 [2024-12-06 22:12:09.968063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:37.253 [2024-12-06 22:12:09.968083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.253 [2024-12-06 22:12:10.002731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.253 [2024-12-06 22:12:10.002910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.253 [2024-12-06 22:12:10.002977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.566 ms 00:21:37.253 [2024-12-06 22:12:10.003002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.253 [2024-12-06 22:12:10.003111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.253 [2024-12-06 22:12:10.003138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.253 [2024-12-06 22:12:10.003245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:37.253 [2024-12-06 22:12:10.003270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.254 [2024-12-06 22:12:10.003867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.254 [2024-12-06 22:12:10.004047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.254 [2024-12-06 22:12:10.004109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:21:37.254 [2024-12-06 22:12:10.004336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:37.254 [2024-12-06 22:12:10.004541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.254 [2024-12-06 22:12:10.004566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.254 [2024-12-06 22:12:10.004637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:21:37.254 [2024-12-06 22:12:10.004659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.254 [2024-12-06 22:12:10.023086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.254 [2024-12-06 22:12:10.023279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.254 [2024-12-06 22:12:10.023343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.387 ms 00:21:37.254 [2024-12-06 22:12:10.023369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.254 [2024-12-06 22:12:10.054847] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:37.254 [2024-12-06 22:12:10.055053] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:37.254 [2024-12-06 22:12:10.055131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.254 [2024-12-06 22:12:10.055155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:37.254 [2024-12-06 22:12:10.055204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.622 ms 00:21:37.254 [2024-12-06 22:12:10.055233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.254 [2024-12-06 22:12:10.080868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.254 [2024-12-06 22:12:10.081042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:37.254 [2024-12-06 22:12:10.081108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.506 ms 00:21:37.254 [2024-12-06 22:12:10.081133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.254 [2024-12-06 22:12:10.094116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.254 [2024-12-06 22:12:10.094298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:37.254 [2024-12-06 22:12:10.094364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.740 ms 00:21:37.254 [2024-12-06 22:12:10.094387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.254 [2024-12-06 22:12:10.115234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.254 [2024-12-06 22:12:10.115430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:37.254 [2024-12-06 22:12:10.115502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.383 ms 00:21:37.254 [2024-12-06 22:12:10.115527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.254 [2024-12-06 22:12:10.116284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.254 [2024-12-06 22:12:10.116428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:37.254 [2024-12-06 22:12:10.116500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:21:37.254 [2024-12-06 22:12:10.116524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.514 [2024-12-06 
22:12:10.183336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.514 [2024-12-06 22:12:10.183550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:37.514 [2024-12-06 22:12:10.183620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.736 ms 00:21:37.514 [2024-12-06 22:12:10.183644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.514 [2024-12-06 22:12:10.194898] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:37.514 [2024-12-06 22:12:10.214083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.514 [2024-12-06 22:12:10.214290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:37.514 [2024-12-06 22:12:10.214314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.242 ms 00:21:37.514 [2024-12-06 22:12:10.214325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.514 [2024-12-06 22:12:10.214419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.514 [2024-12-06 22:12:10.214434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:37.514 [2024-12-06 22:12:10.214444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:37.514 [2024-12-06 22:12:10.214455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.514 [2024-12-06 22:12:10.214514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.514 [2024-12-06 22:12:10.214526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:37.514 [2024-12-06 22:12:10.214534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:37.514 [2024-12-06 22:12:10.214547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.514 [2024-12-06 22:12:10.214574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.514 [2024-12-06 22:12:10.214585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:37.514 [2024-12-06 22:12:10.214593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:37.514 [2024-12-06 22:12:10.214606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.514 [2024-12-06 22:12:10.214644] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:37.514 [2024-12-06 22:12:10.214660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.514 [2024-12-06 22:12:10.214672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:37.514 [2024-12-06 22:12:10.214682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:37.514 [2024-12-06 22:12:10.214690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.514 [2024-12-06 22:12:10.240520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.514 [2024-12-06 22:12:10.240569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:37.514 [2024-12-06 22:12:10.240586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.796 ms 00:21:37.514 [2024-12-06 22:12:10.240594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.514 [2024-12-06 22:12:10.240708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.514 [2024-12-06 22:12:10.240720] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:37.514 [2024-12-06 22:12:10.240732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:37.514 [2024-12-06 22:12:10.240746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.514 [2024-12-06 22:12:10.241884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:37.514 [2024-12-06 22:12:10.245259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.290 ms, result 0 00:21:37.514 [2024-12-06 22:12:10.246953] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:37.514 Some configs were skipped because the RPC state that can call them passed over. 00:21:37.514 22:12:10 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:37.775 [2024-12-06 22:12:10.495724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.775 [2024-12-06 22:12:10.495905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:37.775 [2024-12-06 22:12:10.495972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:21:37.775 [2024-12-06 22:12:10.495999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.775 [2024-12-06 22:12:10.496071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.531 ms, result 0 00:21:37.775 true 00:21:37.775 22:12:10 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:38.036 [2024-12-06 22:12:10.715723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.036 [2024-12-06 22:12:10.715902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:38.036 [2024-12-06 22:12:10.715969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:21:38.036 [2024-12-06 22:12:10.715992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.036 [2024-12-06 22:12:10.716096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.303 ms, result 0 00:21:38.036 true 00:21:38.036 22:12:10 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77035 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77035 ']' 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77035 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77035 00:21:38.036 killing process with pid 77035 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77035' 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77035 00:21:38.036 22:12:10 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77035 00:21:38.609 [2024-12-06 22:12:11.409463] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.409516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:38.610 [2024-12-06 22:12:11.409527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:38.610 [2024-12-06 22:12:11.409534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.409564] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:38.610 [2024-12-06 22:12:11.411655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.411680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:38.610 [2024-12-06 22:12:11.411691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.076 ms 00:21:38.610 [2024-12-06 22:12:11.411696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.411922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.411929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:38.610 [2024-12-06 22:12:11.411937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:21:38.610 [2024-12-06 22:12:11.411943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.415208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.415232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:38.610 [2024-12-06 22:12:11.415242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:21:38.610 [2024-12-06 22:12:11.415248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.420506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.420540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:38.610 [2024-12-06 22:12:11.420550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.229 ms 00:21:38.610 [2024-12-06 22:12:11.420556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.427841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.427968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:38.610 [2024-12-06 22:12:11.427984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.229 ms 00:21:38.610 [2024-12-06 22:12:11.427990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.434827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.434920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:38.610 [2024-12-06 22:12:11.434970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.805 ms 00:21:38.610 [2024-12-06 22:12:11.434988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.435105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.435362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:38.610 [2024-12-06 22:12:11.435435] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:38.610 [2024-12-06 22:12:11.435456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.443509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.443603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:38.610 [2024-12-06 22:12:11.443653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.015 ms 00:21:38.610 [2024-12-06 22:12:11.443671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.451082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.451167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:38.610 [2024-12-06 22:12:11.451250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.373 ms 00:21:38.610 [2024-12-06 22:12:11.451267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.458558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.458636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:38.610 [2024-12-06 22:12:11.458687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.245 ms 00:21:38.610 [2024-12-06 22:12:11.458704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.465852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.610 [2024-12-06 22:12:11.465933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:38.610 [2024-12-06 22:12:11.465983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.092 ms 00:21:38.610 [2024-12-06 22:12:11.466000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.610 [2024-12-06 22:12:11.466042] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:38.610 [2024-12-06 22:12:11.466093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466424] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:38.610 [2024-12-06 22:12:11.466968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.466991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 
[2024-12-06 22:12:11.467152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:21:38.611 [2024-12-06 22:12:11.467781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.467981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:38.611 [2024-12-06 22:12:11.468833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:38.612 [2024-12-06 22:12:11.468855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:38.612 [2024-12-06 22:12:11.468907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:38.612 [2024-12-06 22:12:11.468940] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:38.612 [2024-12-06 22:12:11.468960] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfad126-9ab1-4e1e-9416-c27cc38aa4c4 00:21:38.612 [2024-12-06 22:12:11.468984] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:38.612 [2024-12-06 22:12:11.468999] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:38.612 [2024-12-06 22:12:11.469053] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:38.612 [2024-12-06 22:12:11.469069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:38.612 [2024-12-06 22:12:11.469083] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:38.612 [2024-12-06 22:12:11.469099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:38.612 [2024-12-06 22:12:11.469113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:38.612 [2024-12-06 22:12:11.469152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:38.612 [2024-12-06 22:12:11.469168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:38.612 [2024-12-06 22:12:11.469193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
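For reference, the statistics dump above reports 960 total writes against 0 user writes, so the write amplification factor WAF = total writes / user writes = 960 / 0 has no finite value and ftl_debug.c prints it as "inf"; that is expected for a run whose user data was trimmed away before shutdown. A minimal sketch for recomputing it from a captured log, assuming only the ftl_debug.c dump format shown here (the script name waf_from_log.sh is hypothetical):

  #!/bin/sh
  # waf_from_log.sh (hypothetical): recompute WAF from the ftl_debug.c dump in a log.
  log="$1"
  # Take the values from the last dump in the file, in case several are present.
  total=$(grep -o 'total writes: [0-9]*' "$log" | tail -n1 | awk '{print $3}')
  user=$(grep -o 'user writes: [0-9]*' "$log" | tail -n1 | awk '{print $3}')
  if [ "${user:-0}" -eq 0 ]; then
      echo "WAF: inf (no user writes recorded)"
  else
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi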
00:21:38.612 [2024-12-06 22:12:11.469234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:38.612 [2024-12-06 22:12:11.469254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.153 ms 00:21:38.612 [2024-12-06 22:12:11.469268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.479038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.873 [2024-12-06 22:12:11.479118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:38.873 [2024-12-06 22:12:11.479133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.721 ms 00:21:38.873 [2024-12-06 22:12:11.479138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.480338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.873 [2024-12-06 22:12:11.480359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:38.873 [2024-12-06 22:12:11.480369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:21:38.873 [2024-12-06 22:12:11.480375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.515177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.515204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:38.873 [2024-12-06 22:12:11.515213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.515219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.515291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.515299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:38.873 [2024-12-06 22:12:11.515308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.515314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.515346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.515353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:38.873 [2024-12-06 22:12:11.515361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.515366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.515380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.515386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:38.873 [2024-12-06 22:12:11.515392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.515399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.573970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.574089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:38.873 [2024-12-06 22:12:11.574105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.574112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 
22:12:11.622074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.622103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:38.873 [2024-12-06 22:12:11.622113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.622122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.622197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.622206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:38.873 [2024-12-06 22:12:11.622216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.622222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.622246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.622253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:38.873 [2024-12-06 22:12:11.622260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.622266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.622336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.622344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:38.873 [2024-12-06 22:12:11.622352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.622357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.622382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.622389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:38.873 [2024-12-06 22:12:11.622396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.622402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.622434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.622441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:38.873 [2024-12-06 22:12:11.622449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.622455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.622493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.873 [2024-12-06 22:12:11.622500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:38.873 [2024-12-06 22:12:11.622508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.873 [2024-12-06 22:12:11.622513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.873 [2024-12-06 22:12:11.622618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 213.139 ms, result 0 00:21:39.442 22:12:12 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:39.442 [2024-12-06 22:12:12.206456] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:21:39.442 [2024-12-06 22:12:12.206925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77089 ] 00:21:39.700 [2024-12-06 22:12:12.363137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.701 [2024-12-06 22:12:12.437663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.962 [2024-12-06 22:12:12.647013] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:39.962 [2024-12-06 22:12:12.647063] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:39.962 [2024-12-06 22:12:12.801874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.801907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:39.962 [2024-12-06 22:12:12.801917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:39.962 [2024-12-06 22:12:12.801924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.803981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.804009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:39.962 [2024-12-06 22:12:12.804016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms 00:21:39.962 [2024-12-06 22:12:12.804022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.804091] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:39.962 [2024-12-06 22:12:12.804622] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:39.962 [2024-12-06 22:12:12.804679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.804686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:39.962 [2024-12-06 22:12:12.804693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:21:39.962 [2024-12-06 22:12:12.804698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.805663] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:39.962 [2024-12-06 22:12:12.815544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.815665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:39.962 [2024-12-06 22:12:12.815679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.882 ms 00:21:39.962 [2024-12-06 22:12:12.815685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.815748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.815757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:39.962 [2024-12-06 22:12:12.815764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:39.962 [2024-12-06 
22:12:12.815769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.820201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.820222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:39.962 [2024-12-06 22:12:12.820230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.402 ms 00:21:39.962 [2024-12-06 22:12:12.820236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.820310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.820318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:39.962 [2024-12-06 22:12:12.820324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:39.962 [2024-12-06 22:12:12.820330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.820348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.820354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:39.962 [2024-12-06 22:12:12.820360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:39.962 [2024-12-06 22:12:12.820365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.820383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:39.962 [2024-12-06 22:12:12.823015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.823121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:39.962 [2024-12-06 22:12:12.823133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:21:39.962 [2024-12-06 22:12:12.823140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.823184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.823192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:39.962 [2024-12-06 22:12:12.823198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:39.962 [2024-12-06 22:12:12.823204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.823220] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:39.962 [2024-12-06 22:12:12.823235] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:39.962 [2024-12-06 22:12:12.823261] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:39.962 [2024-12-06 22:12:12.823273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:39.962 [2024-12-06 22:12:12.823352] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:39.962 [2024-12-06 22:12:12.823360] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:39.962 [2024-12-06 22:12:12.823369] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
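For orientation in this second startup trace: ftl/trim.sh@99 and @100 above unmapped 1024 blocks at LBA 0 and at LBA 23591936 (the last 1024 of the 23592960 L2P entries the device reports), the test app with pid 77035 was then shut down, and ftl/trim.sh@105 relaunched the device through spdk_dd to read the data back for verification, which is what triggered this startup. A minimal sketch of that sequence, using the binaries, flags, and paths exactly as they appear in this log (SPDK is shorthand introduced here):

  # Trim/read-back sequence as traced above; paths are the ones from this log.
  SPDK=/home/vagrant/spdk_repo/spdk
  # Unmap 1024 blocks at the start and at the tail of the L2P address space.
  "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # Read the device contents back out for verification (--count=65536).
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/data" \
      --count=65536 --json="$SPDK/test/ftl/config/ftl.json"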
00:21:39.962 [2024-12-06 22:12:12.823378] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:39.962 [2024-12-06 22:12:12.823384] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:39.962 [2024-12-06 22:12:12.823390] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:39.962 [2024-12-06 22:12:12.823396] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:39.962 [2024-12-06 22:12:12.823401] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:39.962 [2024-12-06 22:12:12.823407] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:39.962 [2024-12-06 22:12:12.823413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.823420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:39.962 [2024-12-06 22:12:12.823426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:21:39.962 [2024-12-06 22:12:12.823432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.962 [2024-12-06 22:12:12.823499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.962 [2024-12-06 22:12:12.823508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:39.963 [2024-12-06 22:12:12.823514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:39.963 [2024-12-06 22:12:12.823519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.963 [2024-12-06 22:12:12.823594] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:39.963 [2024-12-06 22:12:12.823602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:39.963 [2024-12-06 22:12:12.823608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.963 [2024-12-06 22:12:12.823615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:39.963 [2024-12-06 22:12:12.823627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:39.963 [2024-12-06 22:12:12.823638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:39.963 [2024-12-06 22:12:12.823645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.963 [2024-12-06 22:12:12.823656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:39.963 [2024-12-06 22:12:12.823666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:39.963 [2024-12-06 22:12:12.823671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.963 [2024-12-06 22:12:12.823676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:39.963 [2024-12-06 22:12:12.823681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:39.963 [2024-12-06 22:12:12.823686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:39.963 [2024-12-06 22:12:12.823696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:39.963 [2024-12-06 22:12:12.823701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:39.963 [2024-12-06 22:12:12.823712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.963 [2024-12-06 22:12:12.823721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:39.963 [2024-12-06 22:12:12.823726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.963 [2024-12-06 22:12:12.823736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:39.963 [2024-12-06 22:12:12.823741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.963 [2024-12-06 22:12:12.823752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:39.963 [2024-12-06 22:12:12.823758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.963 [2024-12-06 22:12:12.823768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:39.963 [2024-12-06 22:12:12.823773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.963 [2024-12-06 22:12:12.823783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:39.963 [2024-12-06 22:12:12.823788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:39.963 [2024-12-06 22:12:12.823793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.963 [2024-12-06 22:12:12.823798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:39.963 [2024-12-06 22:12:12.823802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:39.963 [2024-12-06 22:12:12.823807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:39.963 [2024-12-06 22:12:12.823818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:39.963 [2024-12-06 22:12:12.823824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823829] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:39.963 [2024-12-06 22:12:12.823835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:39.963 [2024-12-06 22:12:12.823843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.963 [2024-12-06 22:12:12.823848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.963 [2024-12-06 22:12:12.823854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:39.963 [2024-12-06 22:12:12.823859] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:39.963 [2024-12-06 22:12:12.823864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:39.963 [2024-12-06 22:12:12.823869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:39.963 [2024-12-06 22:12:12.823874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:39.963 [2024-12-06 22:12:12.823879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:39.963 [2024-12-06 22:12:12.823885] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:39.963 [2024-12-06 22:12:12.823892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.963 [2024-12-06 22:12:12.823898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:39.963 [2024-12-06 22:12:12.823904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:39.963 [2024-12-06 22:12:12.823909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:39.963 [2024-12-06 22:12:12.823915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:39.963 [2024-12-06 22:12:12.823921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:39.963 [2024-12-06 22:12:12.823927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:39.963 [2024-12-06 22:12:12.823933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:39.963 [2024-12-06 22:12:12.823938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:39.963 [2024-12-06 22:12:12.823944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:39.963 [2024-12-06 22:12:12.823949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:39.963 [2024-12-06 22:12:12.823955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:39.963 [2024-12-06 22:12:12.823960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:39.963 [2024-12-06 22:12:12.823965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:39.963 [2024-12-06 22:12:12.823971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:39.963 [2024-12-06 22:12:12.823976] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:39.963 [2024-12-06 22:12:12.823983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.963 [2024-12-06 22:12:12.823989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:39.964 [2024-12-06 22:12:12.823994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:39.964 [2024-12-06 22:12:12.824000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:39.964 [2024-12-06 22:12:12.824006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:39.964 [2024-12-06 22:12:12.824012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.964 [2024-12-06 22:12:12.824020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:39.964 [2024-12-06 22:12:12.824026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:21:39.964 [2024-12-06 22:12:12.824039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.224 [2024-12-06 22:12:12.844744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.224 [2024-12-06 22:12:12.844770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.224 [2024-12-06 22:12:12.844778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.662 ms 00:21:40.224 [2024-12-06 22:12:12.844784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.224 [2024-12-06 22:12:12.844878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.224 [2024-12-06 22:12:12.844886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:40.224 [2024-12-06 22:12:12.844892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:40.224 [2024-12-06 22:12:12.844898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.224 [2024-12-06 22:12:12.881784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.224 [2024-12-06 22:12:12.881814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.224 [2024-12-06 22:12:12.881825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.869 ms 00:21:40.224 [2024-12-06 22:12:12.881832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.224 [2024-12-06 22:12:12.881889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.224 [2024-12-06 22:12:12.881898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:40.224 [2024-12-06 22:12:12.881905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:40.224 [2024-12-06 22:12:12.881912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.224 [2024-12-06 22:12:12.882208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.224 [2024-12-06 22:12:12.882222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:40.224 [2024-12-06 22:12:12.882230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:21:40.224 [2024-12-06 22:12:12.882240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.224 [2024-12-06 22:12:12.882364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:40.224 [2024-12-06 22:12:12.882372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:40.224 [2024-12-06 22:12:12.882379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:40.224 [2024-12-06 22:12:12.882386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.224 [2024-12-06 22:12:12.893041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.224 [2024-12-06 22:12:12.893068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:40.224 [2024-12-06 22:12:12.893076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.638 ms 00:21:40.224 [2024-12-06 22:12:12.893082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.224 [2024-12-06 22:12:12.903099] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:40.224 [2024-12-06 22:12:12.903126] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:40.224 [2024-12-06 22:12:12.903135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:12.903141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:40.225 [2024-12-06 22:12:12.903148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.982 ms 00:21:40.225 [2024-12-06 22:12:12.903153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:12.921913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:12.921939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:40.225 [2024-12-06 22:12:12.921948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.691 ms 00:21:40.225 [2024-12-06 22:12:12.921955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:12.931186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:12.931209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:40.225 [2024-12-06 22:12:12.931217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.177 ms 00:21:40.225 [2024-12-06 22:12:12.931223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:12.940180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:12.940203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:40.225 [2024-12-06 22:12:12.940210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.914 ms 00:21:40.225 [2024-12-06 22:12:12.940216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:12.940676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:12.940692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:40.225 [2024-12-06 22:12:12.940699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:21:40.225 [2024-12-06 22:12:12.940706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:12.985699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:12.985731] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:40.225 [2024-12-06 22:12:12.985742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.975 ms 00:21:40.225 [2024-12-06 22:12:12.985748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:12.993635] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:40.225 [2024-12-06 22:12:13.005230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:13.005252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:40.225 [2024-12-06 22:12:13.005261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.422 ms 00:21:40.225 [2024-12-06 22:12:13.005272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:13.005344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:13.005353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:40.225 [2024-12-06 22:12:13.005361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:40.225 [2024-12-06 22:12:13.005367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:13.005402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:13.005409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:40.225 [2024-12-06 22:12:13.005415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:40.225 [2024-12-06 22:12:13.005423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:13.005447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:13.005454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:40.225 [2024-12-06 22:12:13.005461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:40.225 [2024-12-06 22:12:13.005467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:13.005491] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:40.225 [2024-12-06 22:12:13.005498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:13.005504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:40.225 [2024-12-06 22:12:13.005511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:40.225 [2024-12-06 22:12:13.005520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:13.024110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:13.024259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:40.225 [2024-12-06 22:12:13.024273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.574 ms 00:21:40.225 [2024-12-06 22:12:13.024279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:13.024347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.225 [2024-12-06 22:12:13.024355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:40.225 [2024-12-06 22:12:13.024362] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:40.225 [2024-12-06 22:12:13.024368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.225 [2024-12-06 22:12:13.024992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:40.225 [2024-12-06 22:12:13.027322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.903 ms, result 0 00:21:40.225 [2024-12-06 22:12:13.028284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:40.225 [2024-12-06 22:12:13.043065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:41.613  [2024-12-06T22:12:15.429Z] Copying: 17/256 [MB] (17 MBps) [2024-12-06T22:12:16.372Z] Copying: 27/256 [MB] (10 MBps) [2024-12-06T22:12:17.317Z] Copying: 45/256 [MB] (18 MBps) [2024-12-06T22:12:18.263Z] Copying: 65/256 [MB] (19 MBps) [2024-12-06T22:12:19.209Z] Copying: 83/256 [MB] (18 MBps) [2024-12-06T22:12:20.152Z] Copying: 102/256 [MB] (19 MBps) [2024-12-06T22:12:21.094Z] Copying: 120/256 [MB] (17 MBps) [2024-12-06T22:12:22.482Z] Copying: 138/256 [MB] (18 MBps) [2024-12-06T22:12:23.425Z] Copying: 153/256 [MB] (14 MBps) [2024-12-06T22:12:24.392Z] Copying: 171/256 [MB] (17 MBps) [2024-12-06T22:12:25.423Z] Copying: 183/256 [MB] (12 MBps) [2024-12-06T22:12:26.366Z] Copying: 204/256 [MB] (20 MBps) [2024-12-06T22:12:27.301Z] Copying: 218/256 [MB] (13 MBps) [2024-12-06T22:12:28.235Z] Copying: 229/256 [MB] (11 MBps) [2024-12-06T22:12:29.170Z] Copying: 241/256 [MB] (12 MBps) [2024-12-06T22:12:29.430Z] Copying: 253/256 [MB] (11 MBps) [2024-12-06T22:12:29.691Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-06 22:12:29.538389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:56.819 [2024-12-06 22:12:29.553082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.553131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:56.819 [2024-12-06 22:12:29.553157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:56.819 [2024-12-06 22:12:29.553167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.553223] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:56.819 [2024-12-06 22:12:29.556389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.556428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:56.819 [2024-12-06 22:12:29.556437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.149 ms 00:21:56.819 [2024-12-06 22:12:29.556446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.556681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.556695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:56.819 [2024-12-06 22:12:29.556703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:21:56.819 [2024-12-06 22:12:29.556710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.559667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.559858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:56.819 [2024-12-06 22:12:29.559873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.941 ms 00:21:56.819 [2024-12-06 22:12:29.559880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.565127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.565160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:56.819 [2024-12-06 22:12:29.565169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.226 ms 00:21:56.819 [2024-12-06 22:12:29.565188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.585169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.585211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:56.819 [2024-12-06 22:12:29.585221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.920 ms 00:21:56.819 [2024-12-06 22:12:29.585228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.604011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.604087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:56.819 [2024-12-06 22:12:29.604115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.740 ms 00:21:56.819 [2024-12-06 22:12:29.604124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.604329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.604343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:56.819 [2024-12-06 22:12:29.604363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:56.819 [2024-12-06 22:12:29.604371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.630269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.630479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:56.819 [2024-12-06 22:12:29.630502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.878 ms 00:21:56.819 [2024-12-06 22:12:29.630511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.656657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.656706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:56.819 [2024-12-06 22:12:29.656719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.974 ms 00:21:56.819 [2024-12-06 22:12:29.656727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.819 [2024-12-06 22:12:29.681834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.819 [2024-12-06 22:12:29.681879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:56.819 [2024-12-06 22:12:29.681891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.038 ms 00:21:56.819 [2024-12-06 22:12:29.681899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.081 
[2024-12-06 22:12:29.707490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.081 [2024-12-06 22:12:29.707538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:57.081 [2024-12-06 22:12:29.707550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.502 ms 00:21:57.081 [2024-12-06 22:12:29.707558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.081 [2024-12-06 22:12:29.707609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:57.081 [2024-12-06 22:12:29.707627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:57.081 [2024-12-06 22:12:29.707771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:57.082 [2024-12-06 22:12:29.707779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:57.082 [2024-12-06 22:12:29.707786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:57.082 [2024-12-06 22:12:29.707794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:21:57.082 [... bands 22 through 95: 0 / 261120 wr_cnt: 0 state: free, identical to bands 1-21 and 96-100 ...] 00:21:57.083 [2024-12-06 22:12:29.708467] ftl_debug.c: 167:ftl_dev_dump_bands:
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:57.083 [2024-12-06 22:12:29.708475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:57.083 [2024-12-06 22:12:29.708484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:57.083 [2024-12-06 22:12:29.708493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:57.083 [2024-12-06 22:12:29.708501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:57.083 [2024-12-06 22:12:29.708518] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:57.083 [2024-12-06 22:12:29.708526] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fdfad126-9ab1-4e1e-9416-c27cc38aa4c4 00:21:57.083 [2024-12-06 22:12:29.708536] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:57.083 [2024-12-06 22:12:29.708544] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:57.083 [2024-12-06 22:12:29.708552] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:57.083 [2024-12-06 22:12:29.708561] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:57.083 [2024-12-06 22:12:29.708570] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:57.083 [2024-12-06 22:12:29.708578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:57.083 [2024-12-06 22:12:29.708589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:57.083 [2024-12-06 22:12:29.708596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:57.083 [2024-12-06 22:12:29.708602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:57.083 [2024-12-06 22:12:29.708610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.083 [2024-12-06 22:12:29.708618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:57.083 [2024-12-06 22:12:29.708627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:21:57.083 [2024-12-06 22:12:29.708634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.722540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.083 [2024-12-06 22:12:29.722584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:57.083 [2024-12-06 22:12:29.722596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.870 ms 00:21:57.083 [2024-12-06 22:12:29.722605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.723016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.083 [2024-12-06 22:12:29.723027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:57.083 [2024-12-06 22:12:29.723036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:21:57.083 [2024-12-06 22:12:29.723043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.762512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.762562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:57.083 [2024-12-06 22:12:29.762574] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.762590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.762711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.762723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:57.083 [2024-12-06 22:12:29.762731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.762739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.762794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.762804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:57.083 [2024-12-06 22:12:29.762813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.762822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.762845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.762854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:57.083 [2024-12-06 22:12:29.762862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.762870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.847802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.847861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:57.083 [2024-12-06 22:12:29.847875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.847884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.917259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.917312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:57.083 [2024-12-06 22:12:29.917325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.917334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.917422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.917433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.083 [2024-12-06 22:12:29.917442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.917452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.917486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.917499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.083 [2024-12-06 22:12:29.917508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.917517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.917619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.917630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:21:57.083 [2024-12-06 22:12:29.917639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.917647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.917681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.917692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:57.083 [2024-12-06 22:12:29.917704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.917712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.917761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.917772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.083 [2024-12-06 22:12:29.917781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.083 [2024-12-06 22:12:29.917790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.083 [2024-12-06 22:12:29.917843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.083 [2024-12-06 22:12:29.917858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.084 [2024-12-06 22:12:29.917868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.084 [2024-12-06 22:12:29.917876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.084 [2024-12-06 22:12:29.918044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 364.950 ms, result 0 00:21:58.027 00:21:58.027 00:21:58.027 22:12:30 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:58.599 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:58.599 22:12:31 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:58.599 22:12:31 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:58.599 22:12:31 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:58.599 22:12:31 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:58.599 22:12:31 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:58.599 22:12:31 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:58.599 Process with pid 77035 is not found 00:21:58.599 22:12:31 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77035 00:21:58.599 22:12:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77035 ']' 00:21:58.599 22:12:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77035 00:21:58.599 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77035) - No such process 00:21:58.599 22:12:31 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77035 is not found' 00:21:58.599 ************************************ 00:21:58.599 END TEST ftl_trim 00:21:58.599 ************************************ 00:21:58.599 00:21:58.599 real 1m16.076s 00:21:58.599 user 1m43.420s 00:21:58.599 sys 0m5.747s 00:21:58.599 22:12:31 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.599 22:12:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:58.599 22:12:31 ftl -- 
ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:58.599 22:12:31 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:58.599 22:12:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.599 22:12:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:58.599 ************************************ 00:21:58.599 START TEST ftl_restore 00:21:58.599 ************************************ 00:21:58.599 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:58.861 * Looking for test storage... 00:21:58.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.861 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:58.861 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:21:58.861 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:58.861 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.861 22:12:31 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:58.861 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.861 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:58.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.861 --rc genhtml_branch_coverage=1 00:21:58.861 --rc genhtml_function_coverage=1 00:21:58.861 --rc genhtml_legend=1 00:21:58.861 --rc geninfo_all_blocks=1 00:21:58.861 --rc geninfo_unexecuted_blocks=1 00:21:58.861 00:21:58.861 ' 00:21:58.861 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:58.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.861 --rc genhtml_branch_coverage=1 00:21:58.861 --rc genhtml_function_coverage=1 00:21:58.861 --rc genhtml_legend=1 00:21:58.861 --rc geninfo_all_blocks=1 00:21:58.861 --rc geninfo_unexecuted_blocks=1 00:21:58.861 00:21:58.861 ' 00:21:58.861 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:58.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.861 --rc genhtml_branch_coverage=1 00:21:58.861 --rc genhtml_function_coverage=1 00:21:58.861 --rc genhtml_legend=1 00:21:58.861 --rc geninfo_all_blocks=1 00:21:58.861 --rc geninfo_unexecuted_blocks=1 00:21:58.861 00:21:58.861 ' 00:21:58.861 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:58.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.861 --rc genhtml_branch_coverage=1 00:21:58.861 --rc genhtml_function_coverage=1 00:21:58.861 --rc genhtml_legend=1 00:21:58.861 --rc geninfo_all_blocks=1 00:21:58.861 --rc geninfo_unexecuted_blocks=1 00:21:58.861 00:21:58.861 ' 00:21:58.861 22:12:31 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
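The xtrace wall above is scripts/common.sh deciding which lcov options to use: cmp_versions splits the installed lcov version "1.15" and the threshold "2" on ".", "-" and ":" into the ver1/ver2 arrays, then compares them field by field through the decimal helper; 1 < 2, so "lt 1.15 2" succeeds and the caller exports the lcov_branch_coverage/lcov_function_coverage rc options seen in LCOV_OPTS. A condensed, standalone sketch of that comparison logic (an illustration, not the actual scripts/common.sh code):

  # Field-wise version compare, as traced above: succeeds when $1 < $2.
  lt() {
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo 'lcov 1.15 predates 2.x'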
00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.RRWrTyUaLF 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:58.862 
22:12:31 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77355 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77355 00:21:58.862 22:12:31 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.862 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77355 ']' 00:21:58.862 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.862 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.862 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.862 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.862 22:12:31 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:58.862 [2024-12-06 22:12:31.685006] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:21:58.862 [2024-12-06 22:12:31.685430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77355 ] 00:21:59.123 [2024-12-06 22:12:31.851607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.123 [2024-12-06 22:12:31.972752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.065 22:12:32 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.065 22:12:32 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:00.065 22:12:32 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:00.065 22:12:32 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:00.065 22:12:32 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:00.065 22:12:32 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:00.065 22:12:32 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:00.065 22:12:32 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:00.326 22:12:32 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:00.327 22:12:32 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:00.327 22:12:32 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:00.327 22:12:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:00.327 22:12:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:00.327 22:12:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:00.327 22:12:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:00.327 22:12:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:00.588 22:12:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:00.588 { 00:22:00.588 "name": "nvme0n1", 00:22:00.588 "aliases": [ 00:22:00.588 "34d25621-d0e7-4a77-a254-c1e4835a98e5" 00:22:00.588 ], 00:22:00.588 "product_name": "NVMe disk", 00:22:00.588 "block_size": 4096, 00:22:00.588 "num_blocks": 1310720, 00:22:00.588 "uuid": 
"34d25621-d0e7-4a77-a254-c1e4835a98e5", 00:22:00.588 "numa_id": -1, 00:22:00.588 "assigned_rate_limits": { 00:22:00.588 "rw_ios_per_sec": 0, 00:22:00.588 "rw_mbytes_per_sec": 0, 00:22:00.588 "r_mbytes_per_sec": 0, 00:22:00.588 "w_mbytes_per_sec": 0 00:22:00.588 }, 00:22:00.588 "claimed": true, 00:22:00.588 "claim_type": "read_many_write_one", 00:22:00.588 "zoned": false, 00:22:00.588 "supported_io_types": { 00:22:00.588 "read": true, 00:22:00.588 "write": true, 00:22:00.588 "unmap": true, 00:22:00.588 "flush": true, 00:22:00.588 "reset": true, 00:22:00.588 "nvme_admin": true, 00:22:00.588 "nvme_io": true, 00:22:00.588 "nvme_io_md": false, 00:22:00.588 "write_zeroes": true, 00:22:00.588 "zcopy": false, 00:22:00.588 "get_zone_info": false, 00:22:00.588 "zone_management": false, 00:22:00.588 "zone_append": false, 00:22:00.588 "compare": true, 00:22:00.588 "compare_and_write": false, 00:22:00.588 "abort": true, 00:22:00.588 "seek_hole": false, 00:22:00.588 "seek_data": false, 00:22:00.588 "copy": true, 00:22:00.588 "nvme_iov_md": false 00:22:00.588 }, 00:22:00.588 "driver_specific": { 00:22:00.588 "nvme": [ 00:22:00.588 { 00:22:00.588 "pci_address": "0000:00:11.0", 00:22:00.588 "trid": { 00:22:00.588 "trtype": "PCIe", 00:22:00.588 "traddr": "0000:00:11.0" 00:22:00.588 }, 00:22:00.588 "ctrlr_data": { 00:22:00.588 "cntlid": 0, 00:22:00.588 "vendor_id": "0x1b36", 00:22:00.588 "model_number": "QEMU NVMe Ctrl", 00:22:00.588 "serial_number": "12341", 00:22:00.588 "firmware_revision": "8.0.0", 00:22:00.588 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:00.588 "oacs": { 00:22:00.588 "security": 0, 00:22:00.588 "format": 1, 00:22:00.588 "firmware": 0, 00:22:00.588 "ns_manage": 1 00:22:00.588 }, 00:22:00.588 "multi_ctrlr": false, 00:22:00.588 "ana_reporting": false 00:22:00.588 }, 00:22:00.588 "vs": { 00:22:00.588 "nvme_version": "1.4" 00:22:00.588 }, 00:22:00.588 "ns_data": { 00:22:00.588 "id": 1, 00:22:00.588 "can_share": false 00:22:00.588 } 00:22:00.588 } 00:22:00.588 ], 00:22:00.588 "mp_policy": "active_passive" 00:22:00.588 } 00:22:00.588 } 00:22:00.588 ]' 00:22:00.588 22:12:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:00.588 22:12:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:00.588 22:12:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:00.588 22:12:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:00.588 22:12:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:00.588 22:12:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:00.588 22:12:33 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:00.588 22:12:33 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:00.588 22:12:33 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:00.588 22:12:33 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:00.588 22:12:33 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:00.849 22:12:33 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=05089f38-eae2-42db-a370-e26f2e0f1ae8 00:22:00.849 22:12:33 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:00.849 22:12:33 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05089f38-eae2-42db-a370-e26f2e0f1ae8 00:22:01.111 22:12:33 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:01.111 22:12:33 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b3ee7475-0bad-414a-8cf4-b183f8643b8d 00:22:01.111 22:12:33 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b3ee7475-0bad-414a-8cf4-b183f8643b8d 00:22:01.372 22:12:34 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:01.372 22:12:34 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:01.372 22:12:34 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:01.372 22:12:34 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:01.372 22:12:34 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:01.372 22:12:34 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:01.372 22:12:34 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:01.372 22:12:34 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:01.372 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:01.372 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:01.372 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:01.372 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:01.372 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:01.633 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:01.633 { 00:22:01.633 "name": "fcd9aa8f-cbee-4e03-86ae-2e70ec047971", 00:22:01.633 "aliases": [ 00:22:01.633 "lvs/nvme0n1p0" 00:22:01.633 ], 00:22:01.633 "product_name": "Logical Volume", 00:22:01.633 "block_size": 4096, 00:22:01.633 "num_blocks": 26476544, 00:22:01.633 "uuid": "fcd9aa8f-cbee-4e03-86ae-2e70ec047971", 00:22:01.633 "assigned_rate_limits": { 00:22:01.633 "rw_ios_per_sec": 0, 00:22:01.633 "rw_mbytes_per_sec": 0, 00:22:01.633 "r_mbytes_per_sec": 0, 00:22:01.633 "w_mbytes_per_sec": 0 00:22:01.633 }, 00:22:01.633 "claimed": false, 00:22:01.633 "zoned": false, 00:22:01.633 "supported_io_types": { 00:22:01.633 "read": true, 00:22:01.633 "write": true, 00:22:01.633 "unmap": true, 00:22:01.633 "flush": false, 00:22:01.633 "reset": true, 00:22:01.633 "nvme_admin": false, 00:22:01.633 "nvme_io": false, 00:22:01.633 "nvme_io_md": false, 00:22:01.633 "write_zeroes": true, 00:22:01.633 "zcopy": false, 00:22:01.633 "get_zone_info": false, 00:22:01.633 "zone_management": false, 00:22:01.633 "zone_append": false, 00:22:01.633 "compare": false, 00:22:01.633 "compare_and_write": false, 00:22:01.633 "abort": false, 00:22:01.633 "seek_hole": true, 00:22:01.633 "seek_data": true, 00:22:01.633 "copy": false, 00:22:01.633 "nvme_iov_md": false 00:22:01.633 }, 00:22:01.633 "driver_specific": { 00:22:01.633 "lvol": { 00:22:01.633 "lvol_store_uuid": "b3ee7475-0bad-414a-8cf4-b183f8643b8d", 00:22:01.633 "base_bdev": "nvme0n1", 00:22:01.633 "thin_provision": true, 00:22:01.633 "num_allocated_clusters": 0, 00:22:01.633 "snapshot": false, 00:22:01.633 "clone": false, 00:22:01.633 "esnap_clone": false 00:22:01.633 } 00:22:01.633 } 00:22:01.633 } 00:22:01.633 ]' 00:22:01.633 22:12:34 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:01.633 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:01.633 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:01.633 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:01.633 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:01.633 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:01.633 22:12:34 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:01.633 22:12:34 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:01.633 22:12:34 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:01.894 22:12:34 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:01.894 22:12:34 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:01.894 22:12:34 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:01.894 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:01.894 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:01.894 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:01.894 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:01.894 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:02.155 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:02.155 { 00:22:02.155 "name": "fcd9aa8f-cbee-4e03-86ae-2e70ec047971", 00:22:02.155 "aliases": [ 00:22:02.155 "lvs/nvme0n1p0" 00:22:02.155 ], 00:22:02.155 "product_name": "Logical Volume", 00:22:02.155 "block_size": 4096, 00:22:02.155 "num_blocks": 26476544, 00:22:02.155 "uuid": "fcd9aa8f-cbee-4e03-86ae-2e70ec047971", 00:22:02.155 "assigned_rate_limits": { 00:22:02.155 "rw_ios_per_sec": 0, 00:22:02.155 "rw_mbytes_per_sec": 0, 00:22:02.155 "r_mbytes_per_sec": 0, 00:22:02.155 "w_mbytes_per_sec": 0 00:22:02.155 }, 00:22:02.155 "claimed": false, 00:22:02.155 "zoned": false, 00:22:02.155 "supported_io_types": { 00:22:02.155 "read": true, 00:22:02.155 "write": true, 00:22:02.155 "unmap": true, 00:22:02.155 "flush": false, 00:22:02.155 "reset": true, 00:22:02.155 "nvme_admin": false, 00:22:02.155 "nvme_io": false, 00:22:02.155 "nvme_io_md": false, 00:22:02.155 "write_zeroes": true, 00:22:02.155 "zcopy": false, 00:22:02.155 "get_zone_info": false, 00:22:02.155 "zone_management": false, 00:22:02.155 "zone_append": false, 00:22:02.155 "compare": false, 00:22:02.155 "compare_and_write": false, 00:22:02.155 "abort": false, 00:22:02.155 "seek_hole": true, 00:22:02.155 "seek_data": true, 00:22:02.155 "copy": false, 00:22:02.155 "nvme_iov_md": false 00:22:02.155 }, 00:22:02.155 "driver_specific": { 00:22:02.155 "lvol": { 00:22:02.155 "lvol_store_uuid": "b3ee7475-0bad-414a-8cf4-b183f8643b8d", 00:22:02.155 "base_bdev": "nvme0n1", 00:22:02.155 "thin_provision": true, 00:22:02.155 "num_allocated_clusters": 0, 00:22:02.155 "snapshot": false, 00:22:02.155 "clone": false, 00:22:02.155 "esnap_clone": false 00:22:02.155 } 00:22:02.155 } 00:22:02.155 } 00:22:02.155 ]' 00:22:02.155 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
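Everything from the controller attach onward is plumbing toward a thin-provisioned lvol: get_bdev_size pulls block_size and num_blocks out of bdev_get_bdevs with jq and reports MiB, which is where the repeated JSON dumps above come from. The numbers check out: 1310720 blocks x 4096 B = 5120 MiB for the raw QEMU namespace, and 26476544 x 4096 B = 103424 MiB for the lvol (thin, so it may exceed the 5 GiB backing device). A sketch of the helper and the lvol flow, using only commands and jq filters visible in the trace:

# get_bdev_size in MiB = block_size * num_blocks / 2^20, via bdev_get_bdevs + jq.
get_bdev_size() {
    local bdev_info bs nb
    bdev_info=$($rpc_py bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1310720 or 26476544
    echo $((bs * nb / 1024 / 1024))                # 5120 or 103424
}

# clear_lvols, then carve a thin (-t) 103424 MiB lvol out of a fresh lvstore:
for lvs in $($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
done
lvs=$($rpc_py bdev_lvol_create_lvstore nvme0n1 lvs)
$rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"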
00:22:02.155 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:02.155 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:02.155 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:02.155 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:02.155 22:12:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:02.155 22:12:34 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:02.155 22:12:34 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:02.416 22:12:35 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:02.416 22:12:35 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:02.416 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:02.416 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:02.416 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:02.416 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:02.416 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fcd9aa8f-cbee-4e03-86ae-2e70ec047971 00:22:02.677 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:02.677 { 00:22:02.677 "name": "fcd9aa8f-cbee-4e03-86ae-2e70ec047971", 00:22:02.677 "aliases": [ 00:22:02.677 "lvs/nvme0n1p0" 00:22:02.677 ], 00:22:02.677 "product_name": "Logical Volume", 00:22:02.677 "block_size": 4096, 00:22:02.677 "num_blocks": 26476544, 00:22:02.677 "uuid": "fcd9aa8f-cbee-4e03-86ae-2e70ec047971", 00:22:02.677 "assigned_rate_limits": { 00:22:02.677 "rw_ios_per_sec": 0, 00:22:02.677 "rw_mbytes_per_sec": 0, 00:22:02.677 "r_mbytes_per_sec": 0, 00:22:02.677 "w_mbytes_per_sec": 0 00:22:02.677 }, 00:22:02.677 "claimed": false, 00:22:02.677 "zoned": false, 00:22:02.677 "supported_io_types": { 00:22:02.677 "read": true, 00:22:02.677 "write": true, 00:22:02.677 "unmap": true, 00:22:02.677 "flush": false, 00:22:02.677 "reset": true, 00:22:02.677 "nvme_admin": false, 00:22:02.677 "nvme_io": false, 00:22:02.677 "nvme_io_md": false, 00:22:02.677 "write_zeroes": true, 00:22:02.677 "zcopy": false, 00:22:02.677 "get_zone_info": false, 00:22:02.677 "zone_management": false, 00:22:02.677 "zone_append": false, 00:22:02.677 "compare": false, 00:22:02.677 "compare_and_write": false, 00:22:02.677 "abort": false, 00:22:02.677 "seek_hole": true, 00:22:02.677 "seek_data": true, 00:22:02.677 "copy": false, 00:22:02.677 "nvme_iov_md": false 00:22:02.677 }, 00:22:02.677 "driver_specific": { 00:22:02.677 "lvol": { 00:22:02.677 "lvol_store_uuid": "b3ee7475-0bad-414a-8cf4-b183f8643b8d", 00:22:02.677 "base_bdev": "nvme0n1", 00:22:02.677 "thin_provision": true, 00:22:02.677 "num_allocated_clusters": 0, 00:22:02.677 "snapshot": false, 00:22:02.677 "clone": false, 00:22:02.677 "esnap_clone": false 00:22:02.677 } 00:22:02.677 } 00:22:02.677 } 00:22:02.677 ]' 00:22:02.677 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:02.677 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:02.677 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:02.677 22:12:35 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:22:02.677 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:02.677 22:12:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:02.677 22:12:35 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:02.677 22:12:35 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fcd9aa8f-cbee-4e03-86ae-2e70ec047971 --l2p_dram_limit 10' 00:22:02.677 22:12:35 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:02.677 22:12:35 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:02.677 22:12:35 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:02.677 22:12:35 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:02.677 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:02.677 22:12:35 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fcd9aa8f-cbee-4e03-86ae-2e70ec047971 --l2p_dram_limit 10 -c nvc0n1p0 00:22:02.939 [2024-12-06 22:12:35.613254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.613293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.939 [2024-12-06 22:12:35.613306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:02.939 [2024-12-06 22:12:35.613313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.613358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.613366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.939 [2024-12-06 22:12:35.613373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:02.939 [2024-12-06 22:12:35.613379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.613398] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.939 [2024-12-06 22:12:35.613976] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.939 [2024-12-06 22:12:35.613991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.613997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.939 [2024-12-06 22:12:35.614005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:22:02.939 [2024-12-06 22:12:35.614011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.614034] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f39f8ea0-7cbe-473f-8a3b-f17ae415665c 00:22:02.939 [2024-12-06 22:12:35.615012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.615036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:02.939 [2024-12-06 22:12:35.615043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:02.939 [2024-12-06 22:12:35.615050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.619835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 
22:12:35.619865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.939 [2024-12-06 22:12:35.619873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.728 ms 00:22:02.939 [2024-12-06 22:12:35.619880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.619949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.619958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.939 [2024-12-06 22:12:35.619964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:02.939 [2024-12-06 22:12:35.619973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.620007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.620016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.939 [2024-12-06 22:12:35.620023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:02.939 [2024-12-06 22:12:35.620030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.620054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.939 [2024-12-06 22:12:35.622936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.622962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.939 [2024-12-06 22:12:35.622973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.884 ms 00:22:02.939 [2024-12-06 22:12:35.622978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.623006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.623012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.939 [2024-12-06 22:12:35.623020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:02.939 [2024-12-06 22:12:35.623026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.623046] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:02.939 [2024-12-06 22:12:35.623154] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.939 [2024-12-06 22:12:35.623166] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.939 [2024-12-06 22:12:35.623188] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:02.939 [2024-12-06 22:12:35.623198] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.939 [2024-12-06 22:12:35.623205] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.939 [2024-12-06 22:12:35.623212] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.939 [2024-12-06 22:12:35.623218] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.939 [2024-12-06 22:12:35.623227] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.939 [2024-12-06 22:12:35.623233] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.939 [2024-12-06 22:12:35.623240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.623250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.939 [2024-12-06 22:12:35.623258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:22:02.939 [2024-12-06 22:12:35.623263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.623329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.939 [2024-12-06 22:12:35.623335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.939 [2024-12-06 22:12:35.623343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:02.939 [2024-12-06 22:12:35.623348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.939 [2024-12-06 22:12:35.623426] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.939 [2024-12-06 22:12:35.623434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.939 [2024-12-06 22:12:35.623442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.939 [2024-12-06 22:12:35.623447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.939 [2024-12-06 22:12:35.623460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.939 [2024-12-06 22:12:35.623471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.939 [2024-12-06 22:12:35.623478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.939 [2024-12-06 22:12:35.623490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.939 [2024-12-06 22:12:35.623496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.939 [2024-12-06 22:12:35.623502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.939 [2024-12-06 22:12:35.623507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.939 [2024-12-06 22:12:35.623514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.939 [2024-12-06 22:12:35.623519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.939 [2024-12-06 22:12:35.623532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.939 [2024-12-06 22:12:35.623538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.939 [2024-12-06 22:12:35.623551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.939 [2024-12-06 22:12:35.623562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.939 
[2024-12-06 22:12:35.623567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.939 [2024-12-06 22:12:35.623578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.939 [2024-12-06 22:12:35.623584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.939 [2024-12-06 22:12:35.623595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.939 [2024-12-06 22:12:35.623600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.939 [2024-12-06 22:12:35.623611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.939 [2024-12-06 22:12:35.623618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.939 [2024-12-06 22:12:35.623630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.939 [2024-12-06 22:12:35.623635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.939 [2024-12-06 22:12:35.623642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.939 [2024-12-06 22:12:35.623646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.939 [2024-12-06 22:12:35.623653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.939 [2024-12-06 22:12:35.623658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.939 [2024-12-06 22:12:35.623664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.939 [2024-12-06 22:12:35.623669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.940 [2024-12-06 22:12:35.623675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.940 [2024-12-06 22:12:35.623679] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.940 [2024-12-06 22:12:35.623687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.940 [2024-12-06 22:12:35.623692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.940 [2024-12-06 22:12:35.623698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.940 [2024-12-06 22:12:35.623705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.940 [2024-12-06 22:12:35.623712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.940 [2024-12-06 22:12:35.623717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.940 [2024-12-06 22:12:35.623723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.940 [2024-12-06 22:12:35.623728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.940 [2024-12-06 22:12:35.623735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.940 [2024-12-06 22:12:35.623742] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.940 [2024-12-06 
22:12:35.623752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.940 [2024-12-06 22:12:35.623758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.940 [2024-12-06 22:12:35.623765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.940 [2024-12-06 22:12:35.623771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.940 [2024-12-06 22:12:35.623777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.940 [2024-12-06 22:12:35.623783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.940 [2024-12-06 22:12:35.623790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.940 [2024-12-06 22:12:35.623795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.940 [2024-12-06 22:12:35.623802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:02.940 [2024-12-06 22:12:35.623808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.940 [2024-12-06 22:12:35.623816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.940 [2024-12-06 22:12:35.623821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.940 [2024-12-06 22:12:35.623828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.940 [2024-12-06 22:12:35.623833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.940 [2024-12-06 22:12:35.623840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.940 [2024-12-06 22:12:35.623845] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.940 [2024-12-06 22:12:35.623853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.940 [2024-12-06 22:12:35.623859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.940 [2024-12-06 22:12:35.623866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.940 [2024-12-06 22:12:35.623871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.940 [2024-12-06 22:12:35.623878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.940 [2024-12-06 22:12:35.623884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.940 [2024-12-06 22:12:35.623891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.940 [2024-12-06 22:12:35.623897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:22:02.940 [2024-12-06 22:12:35.623903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.940 [2024-12-06 22:12:35.623942] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:02.940 [2024-12-06 22:12:35.623953] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:07.159 [2024-12-06 22:12:39.233851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.234166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:07.159 [2024-12-06 22:12:39.234207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3609.891 ms 00:22:07.159 [2024-12-06 22:12:39.234221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.268973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.269262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:07.159 [2024-12-06 22:12:39.269287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.506 ms 00:22:07.159 [2024-12-06 22:12:39.269300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.269451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.269466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:07.159 [2024-12-06 22:12:39.269476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:07.159 [2024-12-06 22:12:39.269493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.305357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.305408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.159 [2024-12-06 22:12:39.305422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.806 ms 00:22:07.159 [2024-12-06 22:12:39.305434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.305474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.305488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.159 [2024-12-06 22:12:39.305498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:07.159 [2024-12-06 22:12:39.305516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.306074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.306101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.159 [2024-12-06 22:12:39.306111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:22:07.159 [2024-12-06 22:12:39.306121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 
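The layout dump is internally consistent, and the region sizes can be re-derived from the header figures (4096 B FTL blocks throughout): 20971520 L2P entries at 4 B each is exactly the 80.00 MiB l2p region (blk_sz:0x5000 = 20480 blocks), and the base data region type:0x9 spans 0x1900000 blocks = 102400 MiB of the 103424 MiB device. The scrub that follows covers all 5 NV cache chunks announced at layout setup and dominates the startup time, 3609.891 ms of the 4065.754 ms total reported at the end of startup. The arithmetic, as runnable bash:

# Cross-checks against the layout dump (4 KiB FTL blocks):
echo $((20971520 * 4 / 1024 / 1024))      # 80     -> MiB of L2P table, 4 B per entry
echo $((0x5000 * 4096 / 1024 / 1024))     # 80     -> MiB in region l2p (blk_sz:0x5000)
echo $((0x1900000 * 4096 / 1024 / 1024))  # 102400 -> MiB in the base data region (type:0x9)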
[2024-12-06 22:12:39.306267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.306286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.159 [2024-12-06 22:12:39.306302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:22:07.159 [2024-12-06 22:12:39.306316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.323546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.323593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.159 [2024-12-06 22:12:39.323605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.211 ms 00:22:07.159 [2024-12-06 22:12:39.323615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.353255] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:07.159 [2024-12-06 22:12:39.357203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.357244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:07.159 [2024-12-06 22:12:39.357259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.496 ms 00:22:07.159 [2024-12-06 22:12:39.357268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.461621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.461686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:07.159 [2024-12-06 22:12:39.461707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.303 ms 00:22:07.159 [2024-12-06 22:12:39.461717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.461925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.461941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:07.159 [2024-12-06 22:12:39.461956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:22:07.159 [2024-12-06 22:12:39.461965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.487370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.487560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:07.159 [2024-12-06 22:12:39.487588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.346 ms 00:22:07.159 [2024-12-06 22:12:39.487598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.512279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.512326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:07.159 [2024-12-06 22:12:39.512342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.630 ms 00:22:07.159 [2024-12-06 22:12:39.512350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.512963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.512981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:07.159 
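The "l2p maximum resident size is: 9 (of 10) MiB" notice is the --l2p_dram_limit taking effect: the full 80 MiB mapping table cannot stay resident, so the L2P runs as a cache capped at 10 MiB. The create call that restore.sh assembled earlier in the trace boils down to:

# As issued by restore.sh (ftl_construct_args from the trace, 240 s RPC timeout):
$rpc_py -t 240 bdev_ftl_create -b ftl0 \
    -d fcd9aa8f-cbee-4e03-86ae-2e70ec047971 \
    --l2p_dram_limit 10 -c nvc0n1p0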
[2024-12-06 22:12:39.512993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:22:07.159 [2024-12-06 22:12:39.513004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.599750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.599800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:07.159 [2024-12-06 22:12:39.599819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.702 ms 00:22:07.159 [2024-12-06 22:12:39.599828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.626620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.626798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:07.159 [2024-12-06 22:12:39.626825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.696 ms 00:22:07.159 [2024-12-06 22:12:39.626834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.652232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.652278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:07.159 [2024-12-06 22:12:39.652292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.204 ms 00:22:07.159 [2024-12-06 22:12:39.652300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.677847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.159 [2024-12-06 22:12:39.678023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:07.159 [2024-12-06 22:12:39.678050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.493 ms 00:22:07.159 [2024-12-06 22:12:39.678059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.159 [2024-12-06 22:12:39.678209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.160 [2024-12-06 22:12:39.678238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:07.160 [2024-12-06 22:12:39.678256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:07.160 [2024-12-06 22:12:39.678264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.160 [2024-12-06 22:12:39.678361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.160 [2024-12-06 22:12:39.678376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:07.160 [2024-12-06 22:12:39.678387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:07.160 [2024-12-06 22:12:39.678395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.160 [2024-12-06 22:12:39.679536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4065.754 ms, result 0 00:22:07.160 { 00:22:07.160 "name": "ftl0", 00:22:07.160 "uuid": "f39f8ea0-7cbe-473f-8a3b-f17ae415665c" 00:22:07.160 } 00:22:07.160 22:12:39 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:07.160 22:12:39 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:07.160 22:12:39 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:07.160 22:12:39 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:07.423 [2024-12-06 22:12:40.126885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.126961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:07.423 [2024-12-06 22:12:40.126977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:07.423 [2024-12-06 22:12:40.126988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.127014] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:07.423 [2024-12-06 22:12:40.130084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.130122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:07.423 [2024-12-06 22:12:40.130137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:22:07.423 [2024-12-06 22:12:40.130146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.130446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.130462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:07.423 [2024-12-06 22:12:40.130474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:07.423 [2024-12-06 22:12:40.130482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.133732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.133755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:07.423 [2024-12-06 22:12:40.133767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:22:07.423 [2024-12-06 22:12:40.133775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.140071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.140110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:07.423 [2024-12-06 22:12:40.140127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.273 ms 00:22:07.423 [2024-12-06 22:12:40.140135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.166706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.166751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:07.423 [2024-12-06 22:12:40.166766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.469 ms 00:22:07.423 [2024-12-06 22:12:40.166774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.184507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.184553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:07.423 [2024-12-06 22:12:40.184568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.676 ms 00:22:07.423 [2024-12-06 22:12:40.184576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.184763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.184775] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:07.423 [2024-12-06 22:12:40.184787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:22:07.423 [2024-12-06 22:12:40.184795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.210627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.210671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:07.423 [2024-12-06 22:12:40.210685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.806 ms 00:22:07.423 [2024-12-06 22:12:40.210692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.235842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.235886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:07.423 [2024-12-06 22:12:40.235900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.099 ms 00:22:07.423 [2024-12-06 22:12:40.235908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.260371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.260413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:07.423 [2024-12-06 22:12:40.260426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.408 ms 00:22:07.423 [2024-12-06 22:12:40.260433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.284815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.423 [2024-12-06 22:12:40.284859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:07.423 [2024-12-06 22:12:40.284873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.287 ms 00:22:07.423 [2024-12-06 22:12:40.284880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.423 [2024-12-06 22:12:40.284926] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:07.423 [2024-12-06 22:12:40.284942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:07.423 [2024-12-06 22:12:40.284959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:07.423 [2024-12-06 22:12:40.284967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:07.423 [2024-12-06 22:12:40.284977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:07.423 [2024-12-06 22:12:40.284985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:07.423 [2024-12-06 22:12:40.284995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:07.423 [2024-12-06 22:12:40.285003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:07.423 [2024-12-06 22:12:40.285016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:07.423 [2024-12-06 22:12:40.285025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:07.423 [2024-12-06 22:12:40.285034] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (91 identical entries, 22:12:40.285042 .. 22:12:40.286050, condensed)
00:22:07.425 [2024-12-06 22:12:40.286066] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:07.425 [2024-12-06 22:12:40.286076] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:      f39f8ea0-7cbe-473f-8a3b-f17ae415665c
00:22:07.425 [2024-12-06 22:12:40.286085] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:07.425 [2024-12-06 22:12:40.286100] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:     960
00:22:07.425 [2024-12-06 22:12:40.286110] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:      0
00:22:07.425 [2024-12-06 22:12:40.286119] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:              inf
00:22:07.425 [2024-12-06 22:12:40.286127] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:07.425 [2024-12-06 22:12:40.286137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit:  0
00:22:07.425 [2024-12-06 22:12:40.286144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high:  0
00:22:07.425 [2024-12-06 22:12:40.286153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   low:   0
00:22:07.425 [2024-12-06 22:12:40.286159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   start: 0
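One detail worth pausing on in the stats block: WAF (write amplification factor) is media writes divided by host writes, so 960 total writes against 0 user writes correctly prints as inf -- this shutdown happened before any user data was written. A minimal sketch of that calculation, assuming the usual definition (the helper name is illustrative, not SPDK's code):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper mirroring the dumped counters: WAF is writes the
     * FTL issued to the media divided by writes the host submitted. With
     * user_writes == 0 (only metadata traffic, 960 blocks here) the ratio
     * is infinite, which printf renders as "inf" -- as in the log above. */
    static double write_amplification(uint64_t total_writes, uint64_t user_writes)
    {
        if (user_writes == 0)
            return INFINITY;
        return (double)total_writes / (double)user_writes;
    }

    int main(void)
    {
        printf("WAF: %g\n", write_amplification(960, 0)); /* prints: WAF: inf */
        return 0;
    }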
00:22:07.425 [2024-12-06 22:12:40.286169] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 1.244 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.299716] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 13.451 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.300246] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.423 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.346441] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.346769] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.346888] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.346938] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.435103] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.505406] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.505771] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.505860] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.506001] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.506074] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.506150] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.506273] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
00:22:07.686 [2024-12-06 22:12:40.506458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.534 ms, result 0
00:22:07.687 true
00:22:07.687 22:12:40 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77355
00:22:07.687 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@954 -- '[' -z 77355 ']'
00:22:07.687 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@958 -- kill -0 77355
00:22:07.687 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@959 -- uname
00:22:07.687 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@959 -- '[' Linux = Linux ']'
00:22:07.687 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@960 -- ps --no-headers -o comm= 77355
00:22:07.947 killing process with pid 77355
00:22:07.947 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@960 -- process_name=reactor_0
00:22:07.947 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@964 -- '[' reactor_0 = sudo ']'
00:22:07.947 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@972 -- echo 'killing process with pid 77355'
00:22:07.947 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@973 -- kill 77355
00:22:07.947 22:12:40 ftl.ftl_restore -- common/autotest_common.sh@978 -- wait 77355
00:22:12.176 22:12:44 ftl.ftl_restore -- ftl/restore.sh@69 -- dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:22:16.378 262144+0 records in
00:22:16.378 262144+0 records out
00:22:16.378 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.79193 s, 283 MB/s
00:22:16.378 22:12:48 ftl.ftl_restore -- ftl/restore.sh@70 -- md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
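The dd output above is self-consistent: bs=4K count=256K is 262144 writes of 4096 bytes, exactly 1 GiB, and the 283 MB/s figure uses decimal megabytes over the reported 3.79193 s. A quick verification in plain C (nothing SPDK-specific):

    #include <stdio.h>

    /* Sanity-check the dd line: 256K blocks of 4 KiB, rate in decimal MB/s. */
    int main(void)
    {
        long long bytes = 262144LL * 4096LL;       /* 1073741824, i.e. 1 GiB */
        double rate_mb_s = bytes / 3.79193 / 1e6;  /* ~283 MB/s, as reported */
        printf("%lld bytes, %.0f MB/s\n", bytes, rate_mb_s);
        return 0;
    }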
00:22:17.321 22:12:49 ftl.ftl_restore -- ftl/restore.sh@73 -- /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:17.321 [2024-12-06 22:12:50.034479] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization...
00:22:17.321 [2024-12-06 22:12:50.034571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77580 ]
00:22:17.582 [2024-12-06 22:12:50.188684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:17.843 [2024-12-06 22:12:50.296298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:18.105 [2024-12-06 22:12:50.590118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:18.105 [2024-12-06 22:12:50.590216] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:18.105 [2024-12-06 22:12:50.751840] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.004 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.751983] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 0.032 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.752033] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:18.105 [2024-12-06 22:12:50.752758] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:18.105 [2024-12-06 22:12:50.752784] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.756 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.754511] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:22:18.105 [2024-12-06 22:12:50.768678] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 14.169 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.769013] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.028 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.777112] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 7.991 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.777284] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.061 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.777354] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.008 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.777407] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:18.105 [2024-12-06 22:12:50.781432] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 4.030 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.781533] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.014 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.781608] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:22:18.105 [2024-12-06 22:12:50.781634] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:22:18.105 [2024-12-06 22:12:50.781672] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:22:18.105 [2024-12-06 22:12:50.781691] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:22:18.105 [2024-12-06 22:12:50.781797] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:22:18.105 [2024-12-06 22:12:50.781808] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:18.105 [2024-12-06 22:12:50.781819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:22:18.105 [2024-12-06 22:12:50.781829] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:     103424.00 MiB
00:22:18.105 [2024-12-06 22:12:50.781838] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:22:18.105 [2024-12-06 22:12:50.781846] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:              20971520
00:22:18.105 [2024-12-06 22:12:50.781854] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:         4
00:22:18.105 [2024-12-06 22:12:50.781865] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:     2048
00:22:18.105 [2024-12-06 22:12:50.781873] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count      5
00:22:18.105 [2024-12-06 22:12:50.781881] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.276 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.781988] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.069 ms, status 0
00:22:18.105 [2024-12-06 22:12:50.782121] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
                 Region            offset (MiB)    blocks (MiB)
                 sb                      0.00            0.12
                 l2p                     0.12           80.00
                 band_md                80.12            0.50
                 band_md_mirror         80.62            0.50
                 nvc_md                113.88            0.12
                 nvc_md_mirror         114.00            0.12
                 p2l0                   81.12            8.00
                 p2l1                   89.12            8.00
                 p2l2                   97.12            8.00
                 p2l3                  105.12            8.00
                 trim_md               113.12            0.25
                 trim_md_mirror        113.38            0.25
                 trim_log              113.62            0.12
                 trim_log_mirror       113.75            0.12
00:22:18.106 [2024-12-06 22:12:50.782480] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
                 Region            offset (MiB)    blocks (MiB)
                 sb_mirror               0.00            0.12
                 vmap               102400.25            3.38
                 data_btm                0.25       102400.00
00:22:18.106 [2024-12-06 22:12:50.782556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
                 Region type:0x0        ver:5 blk_offs:0x0       blk_sz:0x20
                 Region type:0x2        ver:0 blk_offs:0x20      blk_sz:0x5000
                 Region type:0x3        ver:2 blk_offs:0x5020    blk_sz:0x80
                 Region type:0x4        ver:2 blk_offs:0x50a0    blk_sz:0x80
                 Region type:0xa        ver:2 blk_offs:0x5120    blk_sz:0x800
                 Region type:0xb        ver:2 blk_offs:0x5920    blk_sz:0x800
                 Region type:0xc        ver:2 blk_offs:0x6120    blk_sz:0x800
                 Region type:0xd        ver:2 blk_offs:0x6920    blk_sz:0x800
                 Region type:0xe        ver:0 blk_offs:0x7120    blk_sz:0x40
                 Region type:0xf        ver:0 blk_offs:0x7160    blk_sz:0x40
                 Region type:0x10       ver:1 blk_offs:0x71a0    blk_sz:0x20
                 Region type:0x11       ver:1 blk_offs:0x71c0    blk_sz:0x20
                 Region type:0x6        ver:2 blk_offs:0x71e0    blk_sz:0x20
                 Region type:0x7        ver:2 blk_offs:0x7200    blk_sz:0x20
                 Region type:0xfffffffe ver:0 blk_offs:0x7220    blk_sz:0x13c0e0
00:22:18.106 [2024-12-06 22:12:50.782681] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
                 Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
                 Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
                 Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
                 Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
                 Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
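The dumps above cross-check each other, assuming the 4096-byte block size the offsets imply: 20971520 L2P entries at 4 bytes apiece come to 80 MiB, matching the "l2p ... 80.00 MiB" row, and the nvc region at blk_offs 0x20 / blk_sz 0x5000 works out to the same 0.12 MiB offset and 80 MiB size. A small verification (block size taken from the dump, not from SPDK headers):

    #include <stdint.h>
    #include <stdio.h>

    /* 20971520 entries * 4 B must fit the l2p region of 0x5000 4-KiB blocks. */
    int main(void)
    {
        uint64_t l2p_bytes    = 20971520ULL * 4;   /* entries * address size */
        uint64_t region_bytes = 0x5000ULL * 4096;  /* blocks * block size    */
        printf("l2p table:  %.2f MiB\n", l2p_bytes / 1048576.0);    /* 80.00 */
        printf("l2p region: %.2f MiB\n", region_bytes / 1048576.0); /* 80.00 */
        return 0;
    }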
00:22:18.106 [2024-12-06 22:12:50.782729] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.676 ms, status 0
00:22:18.106 [2024-12-06 22:12:50.815041] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 32.238 ms, status 0
00:22:18.106 [2024-12-06 22:12:50.815221] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.082 ms, status 0
00:22:18.106 [2024-12-06 22:12:50.868947] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 53.623 ms, status 0
00:22:18.106 [2024-12-06 22:12:50.869073] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.004 ms, status 0
00:22:18.106 [2024-12-06 22:12:50.869693] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.488 ms, status 0
00:22:18.106 [2024-12-06 22:12:50.869892] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.125 ms, status 0
00:22:18.106 [2024-12-06 22:12:50.885629] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 15.682 ms, status 0
00:22:18.107 [2024-12-06 22:12:50.899958] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:22:18.107 [2024-12-06 22:12:50.900146] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:22:18.107 [2024-12-06 22:12:50.900165] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 14.358 ms, status 0
00:22:18.107 [2024-12-06 22:12:50.925482] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 25.222 ms, status 0
00:22:18.107 [2024-12-06 22:12:50.938244] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 12.631 ms, status 0
00:22:18.107 [2024-12-06 22:12:50.950939] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 12.370 ms, status 0
00:22:18.107 [2024-12-06 22:12:50.951664] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.545 ms, status 0
00:22:18.367 [2024-12-06 22:12:51.016271] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 64.538 ms, status 0
00:22:18.367 [2024-12-06 22:12:51.027544] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:22:18.367 [2024-12-06 22:12:51.030511] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 14.083 ms, status 0
00:22:18.367 [2024-12-06 22:12:51.030664] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.017 ms, status 0
00:22:18.367 [2024-12-06 22:12:51.030769] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.036 ms, status 0
00:22:18.367 [2024-12-06 22:12:51.030821] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.005 ms, status 0
00:22:18.367 [2024-12-06 22:12:51.030884] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:22:18.367 [2024-12-06 22:12:51.030898] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.015 ms, status 0
00:22:18.367 [2024-12-06 22:12:51.056710] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 25.767 ms, status 0
00:22:18.367 [2024-12-06 22:12:51.057016] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.037 ms, status 0
00:22:18.367 [2024-12-06 22:12:51.058493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.145 ms, result 0
00:22:19.310 [2024-12-06T22:12:53.126Z .. 2024-12-06T22:13:47.244Z] Copying: 11/1024 [MB] .. Copying: 1024/1024 [MB] (average 18 MBps; per-update rates 10-42 MBps; intermediate progress entries condensed)
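The "average 18 MBps" in the final progress entry is consistent with the wall-clock stamps: roughly 1024 MB between the start of the copy (shortly after 'FTL startup' finished at 22:12:51) and the last update at 22:13:47.244Z. A rough check, with the start time approximated from the log:

    #include <stdio.h>

    /* Elapsed copy time, seconds past 22:00: start ~22:12:51 (assumed from
     * the finish_msg above), end 22:13:47.244Z (last progress stamp). */
    int main(void)
    {
        double start_s = 12 * 60 + 51.0;
        double end_s   = 13 * 60 + 47.244;
        printf("average: %.1f MBps\n", 1024.0 / (end_s - start_s)); /* ~18.2 */
        return 0;
    }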
00:23:14.372 [2024-12-06 22:13:46.897409] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.004 ms, status 0
00:23:14.372 [2024-12-06 22:13:46.897522] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:14.372 [2024-12-06 22:13:46.900632] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 3.093 ms, status 0
00:23:14.372 [2024-12-06 22:13:46.904393] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 3.427 ms, status 0
00:23:14.372 [2024-12-06 22:13:46.923435] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 18.958 ms, status 0
00:23:14.372 [2024-12-06 22:13:46.929933] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 6.184 ms, status 0
00:23:14.372 [2024-12-06 22:13:46.956714] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 26.655 ms, status 0
00:23:14.372 [2024-12-06 22:13:46.972646] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 15.817 ms, status 0
00:23:14.372 [2024-12-06 22:13:46.972995] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.111 ms, status 0
00:23:14.372 [2024-12-06 22:13:46.999126] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 26.083 ms, status 0
00:23:14.373 [2024-12-06 22:13:47.024271] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 25.020 ms, status 0
00:23:14.373 [2024-12-06 22:13:47.049191] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 24.813 ms, status 0
00:23:14.373 [2024-12-06 22:13:47.073516] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 24.196 ms, status 0
00:23:14.373 [2024-12-06 22:13:47.073624] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:14.373 [2024-12-06 22:13:47.073641 .. 22:13:47.074513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 97: 0 / 261120 wr_cnt: 0 state: free (97 identical entries condensed)
00:23:14.374 [2024-12-06 22:13:47.074521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:14.374 [2024-12-06 22:13:47.074529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:14.374 [2024-12-06 22:13:47.074536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:14.374 [2024-12-06 22:13:47.074552] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:14.374 [2024-12-06 22:13:47.074565] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f39f8ea0-7cbe-473f-8a3b-f17ae415665c 00:23:14.374 [2024-12-06 22:13:47.074573] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:14.374 [2024-12-06 22:13:47.074580] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:14.374 [2024-12-06 22:13:47.074588] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:14.374 [2024-12-06 22:13:47.074596] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:14.374 [2024-12-06 22:13:47.074604] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:14.374 [2024-12-06 22:13:47.074620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:14.374 [2024-12-06 22:13:47.074628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:14.374 [2024-12-06 22:13:47.074635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:14.374 [2024-12-06 22:13:47.074642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:14.374 [2024-12-06 22:13:47.074650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.374 [2024-12-06 22:13:47.074657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:14.374 [2024-12-06 22:13:47.074666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:23:14.374 [2024-12-06 22:13:47.074674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.374 [2024-12-06 22:13:47.088434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.374 [2024-12-06 22:13:47.088475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:14.374 [2024-12-06 22:13:47.088486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.731 ms 00:23:14.374 [2024-12-06 22:13:47.088494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.374 [2024-12-06 22:13:47.088899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.374 [2024-12-06 22:13:47.088917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:14.374 [2024-12-06 22:13:47.088926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:23:14.374 [2024-12-06 22:13:47.088942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.374 [2024-12-06 22:13:47.125103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.374 [2024-12-06 22:13:47.125149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.374 [2024-12-06 22:13:47.125161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.374 [2024-12-06 22:13:47.125169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.374 [2024-12-06 22:13:47.125251] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.374 [2024-12-06 22:13:47.125259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:14.374 [2024-12-06 22:13:47.125269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.374 [2024-12-06 22:13:47.125281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.374 [2024-12-06 22:13:47.125364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.374 [2024-12-06 22:13:47.125375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.374 [2024-12-06 22:13:47.125383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.374 [2024-12-06 22:13:47.125391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.374 [2024-12-06 22:13:47.125407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.374 [2024-12-06 22:13:47.125417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.374 [2024-12-06 22:13:47.125425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.374 [2024-12-06 22:13:47.125432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.374 [2024-12-06 22:13:47.209306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.374 [2024-12-06 22:13:47.209355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.374 [2024-12-06 22:13:47.209368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.374 [2024-12-06 22:13:47.209377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.635 [2024-12-06 22:13:47.278274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.635 [2024-12-06 22:13:47.278330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.635 [2024-12-06 22:13:47.278343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.635 [2024-12-06 22:13:47.278358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.635 [2024-12-06 22:13:47.278448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.635 [2024-12-06 22:13:47.278458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.635 [2024-12-06 22:13:47.278467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.635 [2024-12-06 22:13:47.278477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.635 [2024-12-06 22:13:47.278515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.635 [2024-12-06 22:13:47.278524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.635 [2024-12-06 22:13:47.278534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.635 [2024-12-06 22:13:47.278542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.635 [2024-12-06 22:13:47.278640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.635 [2024-12-06 22:13:47.278651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.635 [2024-12-06 22:13:47.278660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.635 [2024-12-06 22:13:47.278668] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.635 [2024-12-06 22:13:47.278707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.635 [2024-12-06 22:13:47.278717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:14.635 [2024-12-06 22:13:47.278727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.635 [2024-12-06 22:13:47.278735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.635 [2024-12-06 22:13:47.278780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.635 [2024-12-06 22:13:47.278793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.635 [2024-12-06 22:13:47.278803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.635 [2024-12-06 22:13:47.278811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.635 [2024-12-06 22:13:47.278859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.635 [2024-12-06 22:13:47.278869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.635 [2024-12-06 22:13:47.278878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.635 [2024-12-06 22:13:47.278886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.636 [2024-12-06 22:13:47.279022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.575 ms, result 0 00:23:15.208 00:23:15.208 00:23:15.208 22:13:47 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:15.208 [2024-12-06 22:13:48.019410] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:23:15.208 [2024-12-06 22:13:48.019534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78175 ] 00:23:15.470 [2024-12-06 22:13:48.174611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.470 [2024-12-06 22:13:48.261354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.731 [2024-12-06 22:13:48.469668] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:15.731 [2024-12-06 22:13:48.469720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:15.994 [2024-12-06 22:13:48.625914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.994 [2024-12-06 22:13:48.625965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:15.994 [2024-12-06 22:13:48.625979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:15.994 [2024-12-06 22:13:48.625987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.626036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.994 [2024-12-06 22:13:48.626049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:15.994 [2024-12-06 22:13:48.626057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:15.994 [2024-12-06 22:13:48.626065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.626084] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:15.994 [2024-12-06 22:13:48.626759] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:15.994 [2024-12-06 22:13:48.626781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.994 [2024-12-06 22:13:48.626789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:15.994 [2024-12-06 22:13:48.626797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:23:15.994 [2024-12-06 22:13:48.626804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.628011] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:15.994 [2024-12-06 22:13:48.641104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.994 [2024-12-06 22:13:48.641143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:15.994 [2024-12-06 22:13:48.641155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.095 ms 00:23:15.994 [2024-12-06 22:13:48.641163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.641242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.994 [2024-12-06 22:13:48.641252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:15.994 [2024-12-06 22:13:48.641260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:15.994 [2024-12-06 22:13:48.641268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.647480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:15.994 [2024-12-06 22:13:48.647644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:15.994 [2024-12-06 22:13:48.647660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.156 ms 00:23:15.994 [2024-12-06 22:13:48.647674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.647754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.994 [2024-12-06 22:13:48.647763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:15.994 [2024-12-06 22:13:48.647772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:15.994 [2024-12-06 22:13:48.647779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.647831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.994 [2024-12-06 22:13:48.647842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:15.994 [2024-12-06 22:13:48.647850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:15.994 [2024-12-06 22:13:48.647857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.647881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:15.994 [2024-12-06 22:13:48.651362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.994 [2024-12-06 22:13:48.651394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:15.994 [2024-12-06 22:13:48.651407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.485 ms 00:23:15.994 [2024-12-06 22:13:48.651415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.651449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.994 [2024-12-06 22:13:48.651458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:15.994 [2024-12-06 22:13:48.651465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:15.994 [2024-12-06 22:13:48.651473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.994 [2024-12-06 22:13:48.651494] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:15.994 [2024-12-06 22:13:48.651514] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:15.994 [2024-12-06 22:13:48.651550] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:15.994 [2024-12-06 22:13:48.651567] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:15.994 [2024-12-06 22:13:48.651671] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:15.994 [2024-12-06 22:13:48.651682] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:15.994 [2024-12-06 22:13:48.651692] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:15.994 [2024-12-06 22:13:48.651702] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:15.994 [2024-12-06 22:13:48.651711] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:15.994 [2024-12-06 22:13:48.651719] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:15.994 [2024-12-06 22:13:48.651727] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:15.995 [2024-12-06 22:13:48.651736] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:15.995 [2024-12-06 22:13:48.651744] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:15.995 [2024-12-06 22:13:48.651752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.995 [2024-12-06 22:13:48.651759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:15.995 [2024-12-06 22:13:48.651767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:23:15.995 [2024-12-06 22:13:48.651775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.995 [2024-12-06 22:13:48.651857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.995 [2024-12-06 22:13:48.651865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:15.995 [2024-12-06 22:13:48.651872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:15.995 [2024-12-06 22:13:48.651880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.995 [2024-12-06 22:13:48.651999] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:15.995 [2024-12-06 22:13:48.652010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:15.995 [2024-12-06 22:13:48.652018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:15.995 [2024-12-06 22:13:48.652026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:15.995 [2024-12-06 22:13:48.652041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:15.995 [2024-12-06 22:13:48.652068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:15.995 [2024-12-06 22:13:48.652076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:15.995 [2024-12-06 22:13:48.652090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:15.995 [2024-12-06 22:13:48.652097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:15.995 [2024-12-06 22:13:48.652103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:15.995 [2024-12-06 22:13:48.652116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:15.995 [2024-12-06 22:13:48.652124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:15.995 [2024-12-06 22:13:48.652131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:15.995 [2024-12-06 22:13:48.652145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:15.995 [2024-12-06 22:13:48.652152] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:15.995 [2024-12-06 22:13:48.652166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.995 [2024-12-06 22:13:48.652195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:15.995 [2024-12-06 22:13:48.652202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.995 [2024-12-06 22:13:48.652215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:15.995 [2024-12-06 22:13:48.652222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.995 [2024-12-06 22:13:48.652236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:15.995 [2024-12-06 22:13:48.652243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.995 [2024-12-06 22:13:48.652257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:15.995 [2024-12-06 22:13:48.652264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:15.995 [2024-12-06 22:13:48.652277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:15.995 [2024-12-06 22:13:48.652284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:15.995 [2024-12-06 22:13:48.652299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:15.995 [2024-12-06 22:13:48.652306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:15.995 [2024-12-06 22:13:48.652313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:15.995 [2024-12-06 22:13:48.652320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:15.995 [2024-12-06 22:13:48.652332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:15.995 [2024-12-06 22:13:48.652339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652345] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:15.995 [2024-12-06 22:13:48.652353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:15.995 [2024-12-06 22:13:48.652360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:15.995 [2024-12-06 22:13:48.652369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.995 [2024-12-06 22:13:48.652377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:15.995 [2024-12-06 22:13:48.652383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:15.995 [2024-12-06 22:13:48.652390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:15.995 
[2024-12-06 22:13:48.652397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:15.995 [2024-12-06 22:13:48.652404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:15.995 [2024-12-06 22:13:48.652410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:15.995 [2024-12-06 22:13:48.652418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:15.995 [2024-12-06 22:13:48.652427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:15.995 [2024-12-06 22:13:48.652439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:15.995 [2024-12-06 22:13:48.652447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:15.995 [2024-12-06 22:13:48.652454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:15.995 [2024-12-06 22:13:48.652461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:15.995 [2024-12-06 22:13:48.652469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:15.995 [2024-12-06 22:13:48.652476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:15.995 [2024-12-06 22:13:48.652483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:15.995 [2024-12-06 22:13:48.652490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:15.995 [2024-12-06 22:13:48.652497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:15.995 [2024-12-06 22:13:48.652505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:15.995 [2024-12-06 22:13:48.652511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:15.995 [2024-12-06 22:13:48.652518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:15.995 [2024-12-06 22:13:48.652525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:15.995 [2024-12-06 22:13:48.652532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:15.995 [2024-12-06 22:13:48.652539] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:15.995 [2024-12-06 22:13:48.652546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:15.995 [2024-12-06 22:13:48.652554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:15.995 [2024-12-06 22:13:48.652561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:15.995 [2024-12-06 22:13:48.652568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:15.995 [2024-12-06 22:13:48.652575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:15.995 [2024-12-06 22:13:48.652582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.995 [2024-12-06 22:13:48.652590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:15.995 [2024-12-06 22:13:48.652598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:23:15.995 [2024-12-06 22:13:48.652609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.995 [2024-12-06 22:13:48.682485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.995 [2024-12-06 22:13:48.682680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:15.995 [2024-12-06 22:13:48.682699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.829 ms 00:23:15.995 [2024-12-06 22:13:48.682715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.995 [2024-12-06 22:13:48.682808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.995 [2024-12-06 22:13:48.682817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:15.995 [2024-12-06 22:13:48.682826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:15.995 [2024-12-06 22:13:48.682834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.995 [2024-12-06 22:13:48.724740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.724791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.996 [2024-12-06 22:13:48.724804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.844 ms 00:23:15.996 [2024-12-06 22:13:48.724813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.996 [2024-12-06 22:13:48.724863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.724873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.996 [2024-12-06 22:13:48.724886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:15.996 [2024-12-06 22:13:48.724894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.996 [2024-12-06 22:13:48.725535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.725559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.996 [2024-12-06 22:13:48.725570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:23:15.996 [2024-12-06 22:13:48.725578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.996 [2024-12-06 22:13:48.725735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.725746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.996 [2024-12-06 22:13:48.725761] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:23:15.996 [2024-12-06 22:13:48.725769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.996 [2024-12-06 22:13:48.741378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.741423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.996 [2024-12-06 22:13:48.741435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.589 ms 00:23:15.996 [2024-12-06 22:13:48.741443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.996 [2024-12-06 22:13:48.755542] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:15.996 [2024-12-06 22:13:48.755588] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:15.996 [2024-12-06 22:13:48.755602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.755610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:15.996 [2024-12-06 22:13:48.755620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.048 ms 00:23:15.996 [2024-12-06 22:13:48.755628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.996 [2024-12-06 22:13:48.781336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.781385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:15.996 [2024-12-06 22:13:48.781397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.656 ms 00:23:15.996 [2024-12-06 22:13:48.781406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.996 [2024-12-06 22:13:48.794402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.794445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:15.996 [2024-12-06 22:13:48.794456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.933 ms 00:23:15.996 [2024-12-06 22:13:48.794464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.996 [2024-12-06 22:13:48.807101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.807157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:15.996 [2024-12-06 22:13:48.807169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.590 ms 00:23:15.996 [2024-12-06 22:13:48.807192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.996 [2024-12-06 22:13:48.807845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.996 [2024-12-06 22:13:48.807875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:15.996 [2024-12-06 22:13:48.807889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:23:15.996 [2024-12-06 22:13:48.807897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.258 [2024-12-06 22:13:48.872725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.258 [2024-12-06 22:13:48.872792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:16.258 [2024-12-06 22:13:48.872815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.808 ms 00:23:16.258 [2024-12-06 22:13:48.872824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.258 [2024-12-06 22:13:48.884251] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:16.258 [2024-12-06 22:13:48.887349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.258 [2024-12-06 22:13:48.887392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:16.258 [2024-12-06 22:13:48.887405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.465 ms 00:23:16.258 [2024-12-06 22:13:48.887415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.258 [2024-12-06 22:13:48.887509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.258 [2024-12-06 22:13:48.887520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:16.258 [2024-12-06 22:13:48.887533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:16.258 [2024-12-06 22:13:48.887542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.258 [2024-12-06 22:13:48.887613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.258 [2024-12-06 22:13:48.887624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:16.258 [2024-12-06 22:13:48.887633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:16.258 [2024-12-06 22:13:48.887642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.258 [2024-12-06 22:13:48.887663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.258 [2024-12-06 22:13:48.887672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:16.258 [2024-12-06 22:13:48.887680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:16.258 [2024-12-06 22:13:48.887689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.258 [2024-12-06 22:13:48.887728] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:16.258 [2024-12-06 22:13:48.887740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.258 [2024-12-06 22:13:48.887748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:16.258 [2024-12-06 22:13:48.887756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:16.258 [2024-12-06 22:13:48.887764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.258 [2024-12-06 22:13:48.913869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.258 [2024-12-06 22:13:48.913918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:16.258 [2024-12-06 22:13:48.913938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.084 ms 00:23:16.258 [2024-12-06 22:13:48.913946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.258 [2024-12-06 22:13:48.914032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.258 [2024-12-06 22:13:48.914043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:16.258 [2024-12-06 22:13:48.914053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:16.258 [2024-12-06 22:13:48.914061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:16.258 [2024-12-06 22:13:48.915493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.055 ms, result 0 00:23:17.646  [2024-12-06T22:13:51.458Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-06T22:13:52.398Z] Copying: 28/1024 [MB] (14 MBps) [2024-12-06T22:13:53.339Z] Copying: 41/1024 [MB] (12 MBps) [2024-12-06T22:13:54.277Z] Copying: 51/1024 [MB] (10 MBps) [2024-12-06T22:13:55.219Z] Copying: 62/1024 [MB] (10 MBps) [2024-12-06T22:13:56.169Z] Copying: 72/1024 [MB] (10 MBps) [2024-12-06T22:13:57.113Z] Copying: 83/1024 [MB] (10 MBps) [2024-12-06T22:13:58.500Z] Copying: 103/1024 [MB] (19 MBps) [2024-12-06T22:13:59.443Z] Copying: 116/1024 [MB] (13 MBps) [2024-12-06T22:14:00.414Z] Copying: 128/1024 [MB] (12 MBps) [2024-12-06T22:14:01.359Z] Copying: 139/1024 [MB] (10 MBps) [2024-12-06T22:14:02.303Z] Copying: 150/1024 [MB] (10 MBps) [2024-12-06T22:14:03.260Z] Copying: 160/1024 [MB] (10 MBps) [2024-12-06T22:14:04.202Z] Copying: 171/1024 [MB] (10 MBps) [2024-12-06T22:14:05.143Z] Copying: 182/1024 [MB] (10 MBps) [2024-12-06T22:14:06.526Z] Copying: 193/1024 [MB] (10 MBps) [2024-12-06T22:14:07.468Z] Copying: 210/1024 [MB] (17 MBps) [2024-12-06T22:14:08.413Z] Copying: 234/1024 [MB] (23 MBps) [2024-12-06T22:14:09.357Z] Copying: 250/1024 [MB] (16 MBps) [2024-12-06T22:14:10.302Z] Copying: 266/1024 [MB] (15 MBps) [2024-12-06T22:14:11.248Z] Copying: 282/1024 [MB] (16 MBps) [2024-12-06T22:14:12.191Z] Copying: 298/1024 [MB] (15 MBps) [2024-12-06T22:14:13.130Z] Copying: 317/1024 [MB] (18 MBps) [2024-12-06T22:14:14.512Z] Copying: 333/1024 [MB] (16 MBps) [2024-12-06T22:14:15.451Z] Copying: 353/1024 [MB] (20 MBps) [2024-12-06T22:14:16.389Z] Copying: 368/1024 [MB] (14 MBps) [2024-12-06T22:14:17.334Z] Copying: 385/1024 [MB] (17 MBps) [2024-12-06T22:14:18.280Z] Copying: 401/1024 [MB] (16 MBps) [2024-12-06T22:14:19.224Z] Copying: 423/1024 [MB] (21 MBps) [2024-12-06T22:14:20.169Z] Copying: 445/1024 [MB] (21 MBps) [2024-12-06T22:14:21.114Z] Copying: 459/1024 [MB] (13 MBps) [2024-12-06T22:14:22.500Z] Copying: 469/1024 [MB] (10 MBps) [2024-12-06T22:14:23.445Z] Copying: 480/1024 [MB] (10 MBps) [2024-12-06T22:14:24.388Z] Copying: 490/1024 [MB] (10 MBps) [2024-12-06T22:14:25.328Z] Copying: 503/1024 [MB] (12 MBps) [2024-12-06T22:14:26.270Z] Copying: 514/1024 [MB] (10 MBps) [2024-12-06T22:14:27.213Z] Copying: 525/1024 [MB] (10 MBps) [2024-12-06T22:14:28.157Z] Copying: 536/1024 [MB] (11 MBps) [2024-12-06T22:14:29.102Z] Copying: 549/1024 [MB] (12 MBps) [2024-12-06T22:14:30.487Z] Copying: 567/1024 [MB] (18 MBps) [2024-12-06T22:14:31.431Z] Copying: 580/1024 [MB] (12 MBps) [2024-12-06T22:14:32.492Z] Copying: 592/1024 [MB] (11 MBps) [2024-12-06T22:14:33.430Z] Copying: 602/1024 [MB] (10 MBps) [2024-12-06T22:14:34.374Z] Copying: 615/1024 [MB] (12 MBps) [2024-12-06T22:14:35.318Z] Copying: 625/1024 [MB] (10 MBps) [2024-12-06T22:14:36.258Z] Copying: 651/1024 [MB] (26 MBps) [2024-12-06T22:14:37.200Z] Copying: 662/1024 [MB] (10 MBps) [2024-12-06T22:14:38.164Z] Copying: 675/1024 [MB] (13 MBps) [2024-12-06T22:14:39.106Z] Copying: 692/1024 [MB] (17 MBps) [2024-12-06T22:14:40.492Z] Copying: 713/1024 [MB] (20 MBps) [2024-12-06T22:14:41.437Z] Copying: 725/1024 [MB] (11 MBps) [2024-12-06T22:14:42.382Z] Copying: 741/1024 [MB] (15 MBps) [2024-12-06T22:14:43.326Z] Copying: 756/1024 [MB] (15 MBps) [2024-12-06T22:14:44.268Z] Copying: 772/1024 [MB] (15 MBps) [2024-12-06T22:14:45.211Z] Copying: 794/1024 [MB] (22 MBps) [2024-12-06T22:14:46.152Z] Copying: 809/1024 [MB] (15 MBps) 
[2024-12-06T22:14:47.539Z] Copying: 828/1024 [MB] (19 MBps) [2024-12-06T22:14:48.112Z] Copying: 850/1024 [MB] (21 MBps) [2024-12-06T22:14:49.501Z] Copying: 870/1024 [MB] (20 MBps) [2024-12-06T22:14:50.446Z] Copying: 891/1024 [MB] (20 MBps) [2024-12-06T22:14:51.393Z] Copying: 906/1024 [MB] (15 MBps) [2024-12-06T22:14:52.336Z] Copying: 918/1024 [MB] (11 MBps) [2024-12-06T22:14:53.280Z] Copying: 934/1024 [MB] (15 MBps) [2024-12-06T22:14:54.224Z] Copying: 951/1024 [MB] (16 MBps) [2024-12-06T22:14:55.169Z] Copying: 970/1024 [MB] (19 MBps) [2024-12-06T22:14:56.111Z] Copying: 989/1024 [MB] (18 MBps) [2024-12-06T22:14:57.492Z] Copying: 1011/1024 [MB] (22 MBps) [2024-12-06T22:14:57.492Z] Copying: 1022/1024 [MB] (10 MBps) [2024-12-06T22:14:57.753Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-06 22:14:57.651065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.881 [2024-12-06 22:14:57.651196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:24.881 [2024-12-06 22:14:57.651224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:24.881 [2024-12-06 22:14:57.651240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.881 [2024-12-06 22:14:57.651283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:24.882 [2024-12-06 22:14:57.657248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.882 [2024-12-06 22:14:57.657316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:24.882 [2024-12-06 22:14:57.657334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.935 ms 00:24:24.882 [2024-12-06 22:14:57.657346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.882 [2024-12-06 22:14:57.657705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.882 [2024-12-06 22:14:57.657730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:24.882 [2024-12-06 22:14:57.657745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:24:24.882 [2024-12-06 22:14:57.657757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.882 [2024-12-06 22:14:57.663396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.882 [2024-12-06 22:14:57.663432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:24.882 [2024-12-06 22:14:57.663446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.617 ms 00:24:24.882 [2024-12-06 22:14:57.663465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.882 [2024-12-06 22:14:57.670094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.882 [2024-12-06 22:14:57.670141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:24.882 [2024-12-06 22:14:57.670153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.605 ms 00:24:24.882 [2024-12-06 22:14:57.670162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.882 [2024-12-06 22:14:57.697273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.882 [2024-12-06 22:14:57.697326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:24.882 [2024-12-06 22:14:57.697341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.033 ms 00:24:24.882 [2024-12-06 
22:14:57.697349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.882 [2024-12-06 22:14:57.713462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.882 [2024-12-06 22:14:57.713513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:24.882 [2024-12-06 22:14:57.713527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.063 ms 00:24:24.882 [2024-12-06 22:14:57.713536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.882 [2024-12-06 22:14:57.713696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.882 [2024-12-06 22:14:57.713708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:24.882 [2024-12-06 22:14:57.713719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:24:24.882 [2024-12-06 22:14:57.713728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.882 [2024-12-06 22:14:57.740395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.882 [2024-12-06 22:14:57.740445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:24.882 [2024-12-06 22:14:57.740456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.651 ms 00:24:24.882 [2024-12-06 22:14:57.740464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.143 [2024-12-06 22:14:57.766203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.143 [2024-12-06 22:14:57.766256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:25.143 [2024-12-06 22:14:57.766268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.698 ms 00:24:25.143 [2024-12-06 22:14:57.766276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.143 [2024-12-06 22:14:57.791293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.143 [2024-12-06 22:14:57.791339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:25.143 [2024-12-06 22:14:57.791352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.970 ms 00:24:25.143 [2024-12-06 22:14:57.791360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.143 [2024-12-06 22:14:57.815990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.143 [2024-12-06 22:14:57.816040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:25.143 [2024-12-06 22:14:57.816052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.556 ms 00:24:25.143 [2024-12-06 22:14:57.816060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.143 [2024-12-06 22:14:57.816113] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:25.143 [2024-12-06 22:14:57.816137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:25.143 [2024-12-06 22:14:57.816152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:25.143 [2024-12-06 22:14:57.816160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:25.143 [2024-12-06 22:14:57.816169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:25.143 [2024-12-06 22:14:57.816202] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:25.143 [2024-12-06 22:14:57.816211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:25.143 [2024-12-06 22:14:57.816220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:25.143 [2024-12-06 22:14:57.816228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 
22:14:57.816396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:24:25.144 [2024-12-06 22:14:57.816584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:25.144 [2024-12-06 22:14:57.816779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:25.145 [2024-12-06 22:14:57.816948] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:25.145 [2024-12-06 22:14:57.816956] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f39f8ea0-7cbe-473f-8a3b-f17ae415665c 00:24:25.145 [2024-12-06 22:14:57.816964] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:25.145 [2024-12-06 22:14:57.816971] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:25.145 [2024-12-06 
22:14:57.816978] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:25.145 [2024-12-06 22:14:57.816986] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:25.145 [2024-12-06 22:14:57.817001] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:25.145 [2024-12-06 22:14:57.817009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:25.145 [2024-12-06 22:14:57.817017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:25.145 [2024-12-06 22:14:57.817024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:25.145 [2024-12-06 22:14:57.817031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:25.145 [2024-12-06 22:14:57.817038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.145 [2024-12-06 22:14:57.817047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:25.145 [2024-12-06 22:14:57.817056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:24:25.145 [2024-12-06 22:14:57.817067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.145 [2024-12-06 22:14:57.830467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.145 [2024-12-06 22:14:57.830515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:25.145 [2024-12-06 22:14:57.830527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.380 ms 00:24:25.145 [2024-12-06 22:14:57.830535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.145 [2024-12-06 22:14:57.830933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.145 [2024-12-06 22:14:57.830958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:25.145 [2024-12-06 22:14:57.830976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:24:25.145 [2024-12-06 22:14:57.830984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.145 [2024-12-06 22:14:57.867421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.145 [2024-12-06 22:14:57.867472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.145 [2024-12-06 22:14:57.867484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.145 [2024-12-06 22:14:57.867494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.145 [2024-12-06 22:14:57.867559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.145 [2024-12-06 22:14:57.867570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.145 [2024-12-06 22:14:57.867585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.145 [2024-12-06 22:14:57.867594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.145 [2024-12-06 22:14:57.867680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.145 [2024-12-06 22:14:57.867692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.145 [2024-12-06 22:14:57.867702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.145 [2024-12-06 22:14:57.867712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.145 [2024-12-06 22:14:57.867730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:25.145 [2024-12-06 22:14:57.867739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.145 [2024-12-06 22:14:57.867748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.145 [2024-12-06 22:14:57.867760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.145 [2024-12-06 22:14:57.954071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.145 [2024-12-06 22:14:57.954129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.145 [2024-12-06 22:14:57.954143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.145 [2024-12-06 22:14:57.954151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.404 [2024-12-06 22:14:58.024468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.404 [2024-12-06 22:14:58.024529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.404 [2024-12-06 22:14:58.024548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.404 [2024-12-06 22:14:58.024557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.404 [2024-12-06 22:14:58.024617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.404 [2024-12-06 22:14:58.024627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.405 [2024-12-06 22:14:58.024636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.405 [2024-12-06 22:14:58.024644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.405 [2024-12-06 22:14:58.024701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.405 [2024-12-06 22:14:58.024712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.405 [2024-12-06 22:14:58.024721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.405 [2024-12-06 22:14:58.024729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.405 [2024-12-06 22:14:58.024829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.405 [2024-12-06 22:14:58.024839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.405 [2024-12-06 22:14:58.024848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.405 [2024-12-06 22:14:58.024856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.405 [2024-12-06 22:14:58.024890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.405 [2024-12-06 22:14:58.024899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:25.405 [2024-12-06 22:14:58.024908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.405 [2024-12-06 22:14:58.024916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.405 [2024-12-06 22:14:58.024963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.405 [2024-12-06 22:14:58.024973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.405 [2024-12-06 22:14:58.024982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.405 [2024-12-06 22:14:58.024989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.405 
[2024-12-06 22:14:58.025037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.405 [2024-12-06 22:14:58.025048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.405 [2024-12-06 22:14:58.025057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.405 [2024-12-06 22:14:58.025065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.405 [2024-12-06 22:14:58.025235] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.121 ms, result 0 00:24:25.974 00:24:25.974 00:24:25.974 22:14:58 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:28.518 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:28.518 22:15:00 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:28.518 [2024-12-06 22:15:00.980619] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:24:28.518 [2024-12-06 22:15:00.980711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78922 ] 00:24:28.518 [2024-12-06 22:15:01.132022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.518 [2024-12-06 22:15:01.254351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.779 [2024-12-06 22:15:01.550096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.779 [2024-12-06 22:15:01.550205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:29.040 [2024-12-06 22:15:01.712148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.040 [2024-12-06 22:15:01.712227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:29.040 [2024-12-06 22:15:01.712243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:29.040 [2024-12-06 22:15:01.712252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.040 [2024-12-06 22:15:01.712312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.040 [2024-12-06 22:15:01.712325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:29.040 [2024-12-06 22:15:01.712334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:29.040 [2024-12-06 22:15:01.712342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.040 [2024-12-06 22:15:01.712364] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:29.040 [2024-12-06 22:15:01.713246] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:29.040 [2024-12-06 22:15:01.713286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.040 [2024-12-06 22:15:01.713295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:29.040 [2024-12-06 22:15:01.713306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:24:29.040 [2024-12-06 22:15:01.713314] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.040 [2024-12-06 22:15:01.715464] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:29.040 [2024-12-06 22:15:01.729937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.040 [2024-12-06 22:15:01.729997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:29.040 [2024-12-06 22:15:01.730012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.476 ms 00:24:29.040 [2024-12-06 22:15:01.730021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.041 [2024-12-06 22:15:01.730114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.041 [2024-12-06 22:15:01.730126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:29.041 [2024-12-06 22:15:01.730136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:29.041 [2024-12-06 22:15:01.730143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.041 [2024-12-06 22:15:01.738692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.041 [2024-12-06 22:15:01.738738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:29.041 [2024-12-06 22:15:01.738751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.447 ms 00:24:29.041 [2024-12-06 22:15:01.738766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.041 [2024-12-06 22:15:01.738852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.041 [2024-12-06 22:15:01.738862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:29.041 [2024-12-06 22:15:01.738871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:29.041 [2024-12-06 22:15:01.738879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.041 [2024-12-06 22:15:01.738926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.041 [2024-12-06 22:15:01.738937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:29.041 [2024-12-06 22:15:01.738946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:29.041 [2024-12-06 22:15:01.738954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.041 [2024-12-06 22:15:01.738983] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:29.041 [2024-12-06 22:15:01.742968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.041 [2024-12-06 22:15:01.743012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:29.041 [2024-12-06 22:15:01.743026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.993 ms 00:24:29.041 [2024-12-06 22:15:01.743035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.041 [2024-12-06 22:15:01.743074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.041 [2024-12-06 22:15:01.743084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:29.041 [2024-12-06 22:15:01.743093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:29.041 [2024-12-06 22:15:01.743101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.041 [2024-12-06 22:15:01.743155] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:29.041 [2024-12-06 22:15:01.743195] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:29.041 [2024-12-06 22:15:01.743233] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:29.041 [2024-12-06 22:15:01.743253] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:29.041 [2024-12-06 22:15:01.743360] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:29.041 [2024-12-06 22:15:01.743372] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:29.041 [2024-12-06 22:15:01.743384] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:29.041 [2024-12-06 22:15:01.743394] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:29.041 [2024-12-06 22:15:01.743404] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:29.041 [2024-12-06 22:15:01.743413] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:29.041 [2024-12-06 22:15:01.743421] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:29.041 [2024-12-06 22:15:01.743432] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:29.041 [2024-12-06 22:15:01.743442] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:29.041 [2024-12-06 22:15:01.743450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.041 [2024-12-06 22:15:01.743460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:29.041 [2024-12-06 22:15:01.743468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:24:29.041 [2024-12-06 22:15:01.743476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.041 [2024-12-06 22:15:01.743559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.041 [2024-12-06 22:15:01.743578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:29.041 [2024-12-06 22:15:01.743586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:29.041 [2024-12-06 22:15:01.743594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.041 [2024-12-06 22:15:01.743703] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:29.041 [2024-12-06 22:15:01.743715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:29.041 [2024-12-06 22:15:01.743724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:29.041 [2024-12-06 22:15:01.743733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:29.041 [2024-12-06 22:15:01.743749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:29.041 [2024-12-06 22:15:01.743762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:24:29.041 [2024-12-06 22:15:01.743769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:29.041 [2024-12-06 22:15:01.743784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:29.041 [2024-12-06 22:15:01.743791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:29.041 [2024-12-06 22:15:01.743798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:29.041 [2024-12-06 22:15:01.743811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:29.041 [2024-12-06 22:15:01.743818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:29.041 [2024-12-06 22:15:01.743825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:29.041 [2024-12-06 22:15:01.743842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:29.041 [2024-12-06 22:15:01.743849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:29.041 [2024-12-06 22:15:01.743863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.041 [2024-12-06 22:15:01.743877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:29.041 [2024-12-06 22:15:01.743884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.041 [2024-12-06 22:15:01.743897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:29.041 [2024-12-06 22:15:01.743904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.041 [2024-12-06 22:15:01.743917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:29.041 [2024-12-06 22:15:01.743924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.041 [2024-12-06 22:15:01.743938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:29.041 [2024-12-06 22:15:01.743945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:29.041 [2024-12-06 22:15:01.743958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:29.041 [2024-12-06 22:15:01.743964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:29.041 [2024-12-06 22:15:01.743971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:29.041 [2024-12-06 22:15:01.743978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:29.041 [2024-12-06 22:15:01.743984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:29.041 [2024-12-06 22:15:01.743991] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.041 [2024-12-06 22:15:01.743998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:29.041 [2024-12-06 22:15:01.744004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:29.041 [2024-12-06 22:15:01.744011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.041 [2024-12-06 22:15:01.744018] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:29.041 [2024-12-06 22:15:01.744025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:29.041 [2024-12-06 22:15:01.744033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:29.041 [2024-12-06 22:15:01.744040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.041 [2024-12-06 22:15:01.744048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:29.041 [2024-12-06 22:15:01.744056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:29.041 [2024-12-06 22:15:01.744065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:29.041 [2024-12-06 22:15:01.744097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:29.041 [2024-12-06 22:15:01.744105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:29.041 [2024-12-06 22:15:01.744112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:29.041 [2024-12-06 22:15:01.744121] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:29.041 [2024-12-06 22:15:01.744131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:29.041 [2024-12-06 22:15:01.744144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:29.041 [2024-12-06 22:15:01.744153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:29.041 [2024-12-06 22:15:01.744162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:29.041 [2024-12-06 22:15:01.744194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:29.041 [2024-12-06 22:15:01.744204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:29.041 [2024-12-06 22:15:01.744212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:29.041 [2024-12-06 22:15:01.744221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:29.041 [2024-12-06 22:15:01.744229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:29.041 [2024-12-06 22:15:01.744237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:29.041 [2024-12-06 22:15:01.744245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:24:29.041 [2024-12-06 22:15:01.744252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:29.041 [2024-12-06 22:15:01.744260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:29.042 [2024-12-06 22:15:01.744268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:29.042 [2024-12-06 22:15:01.744275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:29.042 [2024-12-06 22:15:01.744283] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:29.042 [2024-12-06 22:15:01.744292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:29.042 [2024-12-06 22:15:01.744301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:29.042 [2024-12-06 22:15:01.744308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:29.042 [2024-12-06 22:15:01.744316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:29.042 [2024-12-06 22:15:01.744324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:29.042 [2024-12-06 22:15:01.744332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.744340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:29.042 [2024-12-06 22:15:01.744348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:24:29.042 [2024-12-06 22:15:01.744356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.776857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.776912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.042 [2024-12-06 22:15:01.776925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.451 ms 00:24:29.042 [2024-12-06 22:15:01.776938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.777031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.777041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:29.042 [2024-12-06 22:15:01.777050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:29.042 [2024-12-06 22:15:01.777057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.826556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.826612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:29.042 [2024-12-06 22:15:01.826626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.435 ms 00:24:29.042 [2024-12-06 22:15:01.826635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.826687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.826698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:29.042 [2024-12-06 22:15:01.826711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:29.042 [2024-12-06 22:15:01.826719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.827385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.827423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:29.042 [2024-12-06 22:15:01.827434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:24:29.042 [2024-12-06 22:15:01.827443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.827607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.827618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:29.042 [2024-12-06 22:15:01.827633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:24:29.042 [2024-12-06 22:15:01.827641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.843632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.843685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:29.042 [2024-12-06 22:15:01.843697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.971 ms 00:24:29.042 [2024-12-06 22:15:01.843705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.858347] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:29.042 [2024-12-06 22:15:01.858399] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:29.042 [2024-12-06 22:15:01.858413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.858423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:29.042 [2024-12-06 22:15:01.858433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.595 ms 00:24:29.042 [2024-12-06 22:15:01.858441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.884658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.884713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:29.042 [2024-12-06 22:15:01.884725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.157 ms 00:24:29.042 [2024-12-06 22:15:01.884733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.042 [2024-12-06 22:15:01.898058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.042 [2024-12-06 22:15:01.898109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:29.042 [2024-12-06 22:15:01.898123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.252 ms 00:24:29.042 [2024-12-06 22:15:01.898130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:01.911262] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:01.911314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:29.303 [2024-12-06 22:15:01.911326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.067 ms 00:24:29.303 [2024-12-06 22:15:01.911334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:01.911992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:01.912027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:29.303 [2024-12-06 22:15:01.912041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:24:29.303 [2024-12-06 22:15:01.912050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:01.977966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:01.978027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:29.303 [2024-12-06 22:15:01.978049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.896 ms 00:24:29.303 [2024-12-06 22:15:01.978058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:01.988944] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:29.303 [2024-12-06 22:15:01.991817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:01.991858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:29.303 [2024-12-06 22:15:01.991871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.705 ms 00:24:29.303 [2024-12-06 22:15:01.991879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:01.991962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:01.991973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:29.303 [2024-12-06 22:15:01.991986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:29.303 [2024-12-06 22:15:01.991994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:01.992081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:01.992092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:29.303 [2024-12-06 22:15:01.992101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:29.303 [2024-12-06 22:15:01.992109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:01.992130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:01.992139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:29.303 [2024-12-06 22:15:01.992147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:29.303 [2024-12-06 22:15:01.992156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:01.992209] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:29.303 [2024-12-06 22:15:01.992220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:01.992229] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:29.303 [2024-12-06 22:15:01.992238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:29.303 [2024-12-06 22:15:01.992246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:02.018360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:02.018411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:29.303 [2024-12-06 22:15:02.018431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.093 ms 00:24:29.303 [2024-12-06 22:15:02.018441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:02.018532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-12-06 22:15:02.018544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:29.303 [2024-12-06 22:15:02.018553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:29.303 [2024-12-06 22:15:02.018562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-12-06 22:15:02.019875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.223 ms, result 0 00:24:30.244  [2024-12-06T22:15:04.106Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-06T22:15:05.048Z] Copying: 36/1024 [MB] (24 MBps) [2024-12-06T22:15:06.427Z] Copying: 53/1024 [MB] (16 MBps) [2024-12-06T22:15:07.368Z] Copying: 72/1024 [MB] (19 MBps) [2024-12-06T22:15:08.312Z] Copying: 103/1024 [MB] (30 MBps) [2024-12-06T22:15:09.255Z] Copying: 116/1024 [MB] (13 MBps) [2024-12-06T22:15:10.200Z] Copying: 131/1024 [MB] (14 MBps) [2024-12-06T22:15:11.144Z] Copying: 142/1024 [MB] (11 MBps) [2024-12-06T22:15:12.096Z] Copying: 166/1024 [MB] (24 MBps) [2024-12-06T22:15:13.041Z] Copying: 187/1024 [MB] (20 MBps) [2024-12-06T22:15:14.430Z] Copying: 201/1024 [MB] (14 MBps) [2024-12-06T22:15:15.375Z] Copying: 217/1024 [MB] (16 MBps) [2024-12-06T22:15:16.314Z] Copying: 235/1024 [MB] (17 MBps) [2024-12-06T22:15:17.257Z] Copying: 248/1024 [MB] (13 MBps) [2024-12-06T22:15:18.197Z] Copying: 288/1024 [MB] (39 MBps) [2024-12-06T22:15:19.140Z] Copying: 317/1024 [MB] (29 MBps) [2024-12-06T22:15:20.081Z] Copying: 362/1024 [MB] (45 MBps) [2024-12-06T22:15:21.464Z] Copying: 409/1024 [MB] (46 MBps) [2024-12-06T22:15:22.034Z] Copying: 456/1024 [MB] (47 MBps) [2024-12-06T22:15:23.418Z] Copying: 494/1024 [MB] (37 MBps) [2024-12-06T22:15:24.361Z] Copying: 512/1024 [MB] (17 MBps) [2024-12-06T22:15:25.306Z] Copying: 534/1024 [MB] (21 MBps) [2024-12-06T22:15:26.246Z] Copying: 577/1024 [MB] (43 MBps) [2024-12-06T22:15:27.187Z] Copying: 595/1024 [MB] (18 MBps) [2024-12-06T22:15:28.131Z] Copying: 622/1024 [MB] (26 MBps) [2024-12-06T22:15:29.077Z] Copying: 656/1024 [MB] (33 MBps) [2024-12-06T22:15:30.464Z] Copying: 671/1024 [MB] (15 MBps) [2024-12-06T22:15:31.038Z] Copying: 695/1024 [MB] (23 MBps) [2024-12-06T22:15:32.423Z] Copying: 713/1024 [MB] (18 MBps) [2024-12-06T22:15:33.365Z] Copying: 731/1024 [MB] (18 MBps) [2024-12-06T22:15:34.309Z] Copying: 748/1024 [MB] (16 MBps) [2024-12-06T22:15:35.252Z] Copying: 764/1024 [MB] (16 MBps) [2024-12-06T22:15:36.320Z] Copying: 777/1024 [MB] (12 MBps) [2024-12-06T22:15:37.262Z] Copying: 793/1024 [MB] (16 MBps) [2024-12-06T22:15:38.202Z] Copying: 807/1024 [MB] (14 MBps) [2024-12-06T22:15:39.141Z] Copying: 819/1024 [MB] (11 MBps) 
[2024-12-06T22:15:40.083Z] Copying: 829/1024 [MB] (10 MBps) [2024-12-06T22:15:41.469Z] Copying: 839/1024 [MB] (10 MBps) [2024-12-06T22:15:42.042Z] Copying: 849/1024 [MB] (10 MBps) [2024-12-06T22:15:43.429Z] Copying: 860/1024 [MB] (10 MBps) [2024-12-06T22:15:44.374Z] Copying: 870/1024 [MB] (10 MBps) [2024-12-06T22:15:45.325Z] Copying: 881/1024 [MB] (10 MBps) [2024-12-06T22:15:46.264Z] Copying: 891/1024 [MB] (10 MBps) [2024-12-06T22:15:47.204Z] Copying: 902/1024 [MB] (10 MBps) [2024-12-06T22:15:48.145Z] Copying: 924/1024 [MB] (21 MBps) [2024-12-06T22:15:49.091Z] Copying: 936/1024 [MB] (12 MBps) [2024-12-06T22:15:50.035Z] Copying: 952/1024 [MB] (15 MBps) [2024-12-06T22:15:51.425Z] Copying: 965/1024 [MB] (13 MBps) [2024-12-06T22:15:52.372Z] Copying: 976/1024 [MB] (10 MBps) [2024-12-06T22:15:53.313Z] Copying: 989/1024 [MB] (12 MBps) [2024-12-06T22:15:54.256Z] Copying: 999/1024 [MB] (10 MBps) [2024-12-06T22:15:55.200Z] Copying: 1020/1024 [MB] (21 MBps) [2024-12-06T22:15:55.200Z] Copying: 1048548/1048576 [kB] (3268 kBps) [2024-12-06T22:15:55.200Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-12-06 22:15:55.085976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.328 [2024-12-06 22:15:55.086056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:22.328 [2024-12-06 22:15:55.086084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:22.328 [2024-12-06 22:15:55.086094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.328 [2024-12-06 22:15:55.088225] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:22.328 [2024-12-06 22:15:55.091800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.328 [2024-12-06 22:15:55.091844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:22.328 [2024-12-06 22:15:55.091857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 00:25:22.328 [2024-12-06 22:15:55.091865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.328 [2024-12-06 22:15:55.106949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.328 [2024-12-06 22:15:55.107014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:22.328 [2024-12-06 22:15:55.107028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.439 ms 00:25:22.328 [2024-12-06 22:15:55.107052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.328 [2024-12-06 22:15:55.130857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.328 [2024-12-06 22:15:55.130909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:22.328 [2024-12-06 22:15:55.130922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.784 ms 00:25:22.328 [2024-12-06 22:15:55.130931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.328 [2024-12-06 22:15:55.137109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.328 [2024-12-06 22:15:55.137147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:22.328 [2024-12-06 22:15:55.137158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:25:22.328 [2024-12-06 22:15:55.137182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.328 [2024-12-06 22:15:55.164071] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action
00:25:22.328 [2024-12-06 22:15:55.164153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:22.328 [2024-12-06 22:15:55.164166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.844 ms
00:25:22.328 [2024-12-06 22:15:55.164183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.328 [2024-12-06 22:15:55.180333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:22.328 [2024-12-06 22:15:55.180381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:22.328 [2024-12-06 22:15:55.180396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.104 ms
00:25:22.328 [2024-12-06 22:15:55.180404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.590 [2024-12-06 22:15:55.412697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:22.590 [2024-12-06 22:15:55.412752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:22.590 [2024-12-06 22:15:55.412764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 232.242 ms
00:25:22.590 [2024-12-06 22:15:55.412773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.590 [2024-12-06 22:15:55.438626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:22.590 [2024-12-06 22:15:55.438674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:25:22.590 [2024-12-06 22:15:55.438687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.837 ms
00:25:22.590 [2024-12-06 22:15:55.438695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.853 [2024-12-06 22:15:55.463543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:22.853 [2024-12-06 22:15:55.463592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:25:22.853 [2024-12-06 22:15:55.463603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.803 ms
00:25:22.853 [2024-12-06 22:15:55.463610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.853 [2024-12-06 22:15:55.488300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:22.853 [2024-12-06 22:15:55.488348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:22.853 [2024-12-06 22:15:55.488359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.647 ms
00:25:22.853 [2024-12-06 22:15:55.488367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.853 [2024-12-06 22:15:55.512540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:22.853 [2024-12-06 22:15:55.512583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:22.853 [2024-12-06 22:15:55.512594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.104 ms
00:25:22.853 [2024-12-06 22:15:55.512602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.853 [2024-12-06 22:15:55.512646] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:22.853 [2024-12-06 22:15:55.512662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101120 / 261120 wr_cnt: 1 state: open
00:25:22.853 [2024-12-06 22:15:55.512673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.512997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:25:22.853 [2024-12-06 22:15:55.513434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:22.854 [2024-12-06 22:15:55.513688] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:22.854 [2024-12-06 22:15:55.513697] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f39f8ea0-7cbe-473f-8a3b-f17ae415665c
00:25:22.854 [2024-12-06 22:15:55.513706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101120
00:25:22.854 [2024-12-06 22:15:55.513714] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102080
00:25:22.854 [2024-12-06 22:15:55.513721] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101120
00:25:22.854 [2024-12-06 22:15:55.513731] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095
00:25:22.854 [2024-12-06 22:15:55.513748] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:22.854 [2024-12-06 22:15:55.513757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:22.854 [2024-12-06 22:15:55.513765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:22.854 [2024-12-06 22:15:55.513772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:22.854 [2024-12-06 22:15:55.513778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:22.854 [2024-12-06 22:15:55.513786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:22.854 [2024-12-06 22:15:55.513794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:22.854 [2024-12-06 22:15:55.513804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.141 ms
00:25:22.854 [2024-12-06 22:15:55.513812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.526984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:22.854 [2024-12-06 22:15:55.527030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:22.854 [2024-12-06 22:15:55.527049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.152 ms
00:25:22.854 [2024-12-06 22:15:55.527057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.527467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:22.854 [2024-12-06 22:15:55.527478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:22.854 [2024-12-06 22:15:55.527488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms
00:25:22.854 [2024-12-06 22:15:55.527496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.563757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.563807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:22.854 [2024-12-06 22:15:55.563818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.563827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.563887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.563897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:22.854 [2024-12-06 22:15:55.563906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.563915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.563998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.564013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:22.854 [2024-12-06 22:15:55.564023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.564031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.564049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.564058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:22.854 [2024-12-06 22:15:55.564067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.564075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.646930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.646994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:22.854 [2024-12-06 22:15:55.647007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.647015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.714903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.714958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:22.854 [2024-12-06 22:15:55.714971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.714980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.715041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.715051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:22.854 [2024-12-06 22:15:55.715060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.715075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.715139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.715150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:22.854 [2024-12-06 22:15:55.715159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.715168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.715291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.715302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:22.854 [2024-12-06 22:15:55.715311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.715323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.715355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.715364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:22.854 [2024-12-06 22:15:55.715372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.715381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.715421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.854 [2024-12-06 22:15:55.715430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:22.854 [2024-12-06 22:15:55.715438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.854 [2024-12-06 22:15:55.715447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.854 [2024-12-06 22:15:55.715499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:22.855 [2024-12-06 22:15:55.715510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:22.855 [2024-12-06 22:15:55.715519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:22.855 [2024-12-06 22:15:55.715527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:22.855 [2024-12-06 22:15:55.715659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 632.693 ms, result 0
00:25:24.240
00:25:24.240
00:25:24.240 22:15:56 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:25:24.240 [2024-12-06 22:15:57.072224] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization...
00:25:24.240 [2024-12-06 22:15:57.072368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79498 ]
00:25:24.502 [2024-12-06 22:15:57.235126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:24.502 [2024-12-06 22:15:57.357115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:25.075 [2024-12-06 22:15:57.653537] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:25.075 [2024-12-06 22:15:57.653620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:25.075 [2024-12-06 22:15:57.814388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.814452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:25.075 [2024-12-06 22:15:57.814467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:25.075 [2024-12-06 22:15:57.814476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.814529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.814543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:25.075 [2024-12-06 22:15:57.814552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:25:25.075 [2024-12-06 22:15:57.814561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.814581] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:25.075 [2024-12-06 22:15:57.815412] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:25.075 [2024-12-06 22:15:57.815462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.815471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:25.075 [2024-12-06 22:15:57.815481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms
00:25:25.075 [2024-12-06 22:15:57.815489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.817207] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:25.075 [2024-12-06 22:15:57.831465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.831512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:25:25.075 [2024-12-06 22:15:57.831525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.259 ms
00:25:25.075 [2024-12-06 22:15:57.831533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.831614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.831625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:25:25.075 [2024-12-06 22:15:57.831635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:25:25.075 [2024-12-06 22:15:57.831643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.839616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.839656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:25.075 [2024-12-06 22:15:57.839672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.896 ms
00:25:25.075 [2024-12-06 22:15:57.839681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.839757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.839766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:25.075 [2024-12-06 22:15:57.839775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:25:25.075 [2024-12-06 22:15:57.839784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.839827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.839837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:25:25.075 [2024-12-06 22:15:57.839846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:25:25.075 [2024-12-06 22:15:57.839858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.839883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:25.075 [2024-12-06 22:15:57.844022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.844063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:25.075 [2024-12-06 22:15:57.844073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms
00:25:25.075 [2024-12-06 22:15:57.844081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.844132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.844142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:25:25.075 [2024-12-06 22:15:57.844152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:25:25.075 [2024-12-06 22:15:57.844160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.844223] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:25:25.075 [2024-12-06 22:15:57.844249] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:25:25.075 [2024-12-06 22:15:57.844289] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:25:25.075 [2024-12-06 22:15:57.844306] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:25:25.075 [2024-12-06 22:15:57.844411] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:25:25.075 [2024-12-06 22:15:57.844423] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:25:25.075 [2024-12-06 22:15:57.844433] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:25:25.075 [2024-12-06 22:15:57.844444] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:25:25.075 [2024-12-06 22:15:57.844454] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:25:25.075 [2024-12-06 22:15:57.844463] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:25:25.075 [2024-12-06 22:15:57.844471] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:25:25.075 [2024-12-06 22:15:57.844482] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:25:25.075 [2024-12-06 22:15:57.844490] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:25:25.075 [2024-12-06 22:15:57.844498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.844506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:25:25.075 [2024-12-06 22:15:57.844513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms
00:25:25.075 [2024-12-06 22:15:57.844521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.844608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.075 [2024-12-06 22:15:57.844616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:25:25.075 [2024-12-06 22:15:57.844624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:25:25.075 [2024-12-06 22:15:57.844632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.075 [2024-12-06 22:15:57.844738] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:25:25.075 [2024-12-06 22:15:57.844748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:25:25.075 [2024-12-06 22:15:57.844756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:25:25.075 [2024-12-06 22:15:57.844764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:25.075 [2024-12-06 22:15:57.844772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:25:25.075 [2024-12-06 22:15:57.844779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:25:25.075 [2024-12-06 22:15:57.844786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:25:25.075 [2024-12-06 22:15:57.844793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:25:25.076 [2024-12-06 22:15:57.844800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:25:25.076 [2024-12-06 22:15:57.844807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:25:25.076 [2024-12-06 22:15:57.844813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:25:25.076 [2024-12-06 22:15:57.844820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:25:25.076 [2024-12-06 22:15:57.844826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:25:25.076 [2024-12-06 22:15:57.844841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:25:25.076 [2024-12-06 22:15:57.844848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:25:25.076 [2024-12-06 22:15:57.844858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:25.076 [2024-12-06 22:15:57.844865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:25:25.076 [2024-12-06 22:15:57.844872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:25:25.076 [2024-12-06 22:15:57.844879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:25.076 [2024-12-06 22:15:57.844886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:25:25.076 [2024-12-06 22:15:57.844894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:25:25.076 [2024-12-06 22:15:57.844901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:25.076 [2024-12-06 22:15:57.844908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:25:25.076 [2024-12-06 22:15:57.844915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:25:25.076 [2024-12-06 22:15:57.844922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:25.076 [2024-12-06 22:15:57.844928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:25:25.076 [2024-12-06 22:15:57.844935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:25:25.076 [2024-12-06 22:15:57.844942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:25.076 [2024-12-06 22:15:57.844949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:25:25.076 [2024-12-06 22:15:57.844956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:25:25.076 [2024-12-06 22:15:57.844963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:25.076 [2024-12-06 22:15:57.844970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:25:25.076 [2024-12-06 22:15:57.844977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:25:25.076 [2024-12-06 22:15:57.844983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:25:25.076 [2024-12-06 22:15:57.844990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:25:25.076 [2024-12-06 22:15:57.844997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:25:25.076 [2024-12-06 22:15:57.845003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:25:25.076 [2024-12-06 22:15:57.845012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:25:25.076 [2024-12-06 22:15:57.845019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:25:25.076 [2024-12-06 22:15:57.845026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:25.076 [2024-12-06 22:15:57.845033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:25:25.076 [2024-12-06 22:15:57.845039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:25:25.076 [2024-12-06 22:15:57.845046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:25.076 [2024-12-06 22:15:57.845053] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:25:25.076 [2024-12-06 22:15:57.845062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:25:25.076 [2024-12-06 22:15:57.845069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:25:25.076 [2024-12-06 22:15:57.845076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:25.076 [2024-12-06 22:15:57.845085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:25:25.076 [2024-12-06 22:15:57.845092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:25:25.076 [2024-12-06 22:15:57.845098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:25:25.076 [2024-12-06 22:15:57.845106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:25:25.076 [2024-12-06 22:15:57.845114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:25:25.076 [2024-12-06 22:15:57.845121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:25:25.076 [2024-12-06 22:15:57.845130] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:25:25.076 [2024-12-06 22:15:57.845149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:25.076 [2024-12-06 22:15:57.845158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:25:25.076 [2024-12-06 22:15:57.845166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:25:25.076 [2024-12-06 22:15:57.845186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:25:25.076 [2024-12-06 22:15:57.845195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:25:25.076 [2024-12-06 22:15:57.845202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:25:25.076 [2024-12-06 22:15:57.845210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:25:25.076 [2024-12-06 22:15:57.845217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:25:25.076 [2024-12-06 22:15:57.845225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:25:25.076 [2024-12-06 22:15:57.845233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:25:25.076 [2024-12-06 22:15:57.845241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:25:25.076 [2024-12-06 22:15:57.845248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:25:25.076 [2024-12-06 22:15:57.845256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:25:25.076 [2024-12-06 22:15:57.845263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:25:25.076 [2024-12-06 22:15:57.845271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:25:25.076 [2024-12-06 22:15:57.845278] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:25:25.076 [2024-12-06 22:15:57.845287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:25.076 [2024-12-06 22:15:57.845295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:25:25.076 [2024-12-06 22:15:57.845302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:25:25.076 [2024-12-06 22:15:57.845310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:25:25.076 [2024-12-06 22:15:57.845317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:25:25.076 [2024-12-06 22:15:57.845324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.076 [2024-12-06 22:15:57.845332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:25:25.076 [2024-12-06 22:15:57.845340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms
00:25:25.076 [2024-12-06 22:15:57.845347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.076 [2024-12-06 22:15:57.876569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.076 [2024-12-06 22:15:57.876618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:25.076 [2024-12-06 22:15:57.876633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.168 ms
00:25:25.076 [2024-12-06 22:15:57.876641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.076 [2024-12-06 22:15:57.876731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.076 [2024-12-06 22:15:57.876739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:25:25.076 [2024-12-06 22:15:57.876748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:25:25.076 [2024-12-06 22:15:57.876759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.076 [2024-12-06 22:15:57.922671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.076 [2024-12-06 22:15:57.922726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:25.076 [2024-12-06 22:15:57.922740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.852 ms
00:25:25.076 [2024-12-06 22:15:57.922748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.076 [2024-12-06 22:15:57.922798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.076 [2024-12-06 22:15:57.922812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:25.076 [2024-12-06 22:15:57.922822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:25.076 [2024-12-06 22:15:57.922830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.076 [2024-12-06 22:15:57.923467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.076 [2024-12-06 22:15:57.923491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:25.076 [2024-12-06 22:15:57.923501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms
00:25:25.076 [2024-12-06 22:15:57.923509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.076 [2024-12-06 22:15:57.923668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.076 [2024-12-06 22:15:57.923684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:25.076 [2024-12-06 22:15:57.923693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms
00:25:25.076 [2024-12-06 22:15:57.923701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.076 [2024-12-06 22:15:57.939087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.076 [2024-12-06 22:15:57.939135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:25.076 [2024-12-06 22:15:57.939146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.365 ms
00:25:25.076 [2024-12-06 22:15:57.939154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:57.953355] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:25:25.339 [2024-12-06 22:15:57.953405] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:25:25.339 [2024-12-06 22:15:57.953419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:57.953428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:25:25.339 [2024-12-06 22:15:57.953438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.131 ms
00:25:25.339 [2024-12-06 22:15:57.953446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:57.979304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:57.979367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:25:25.339 [2024-12-06 22:15:57.979380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.805 ms
00:25:25.339 [2024-12-06 22:15:57.979388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:57.992264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:57.992310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:25:25.339 [2024-12-06 22:15:57.992322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.822 ms
00:25:25.339 [2024-12-06 22:15:57.992329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.005087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.005132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:25:25.339 [2024-12-06 22:15:58.005144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.711 ms
00:25:25.339 [2024-12-06 22:15:58.005151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.005796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.005830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:25:25.339 [2024-12-06 22:15:58.005841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms
00:25:25.339 [2024-12-06 22:15:58.005849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.069902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.069972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:25:25.339 [2024-12-06 22:15:58.069988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.033 ms
00:25:25.339 [2024-12-06 22:15:58.069997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.080967] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:25:25.339 [2024-12-06 22:15:58.084131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.084186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:25:25.339 [2024-12-06 22:15:58.084198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.075 ms
00:25:25.339 [2024-12-06 22:15:58.084207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.084295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.084308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:25:25.339 [2024-12-06 22:15:58.084321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:25:25.339 [2024-12-06 22:15:58.084329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.085998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.086046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:25:25.339 [2024-12-06 22:15:58.086057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms
00:25:25.339 [2024-12-06 22:15:58.086065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.086095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.086104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:25:25.339 [2024-12-06 22:15:58.086113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:25:25.339 [2024-12-06 22:15:58.086127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.086165] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:25.339 [2024-12-06 22:15:58.086202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.086212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:25:25.339 [2024-12-06 22:15:58.086221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:25:25.339 [2024-12-06 22:15:58.086229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.111444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.111492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:25:25.339 [2024-12-06 22:15:58.111512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.195 ms
00:25:25.339 [2024-12-06 22:15:58.111520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.111612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.339 [2024-12-06 22:15:58.111622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:25.339 [2024-12-06 22:15:58.111632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:25:25.339 [2024-12-06 22:15:58.111640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.339 [2024-12-06 22:15:58.113267] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.371 ms, result 0
00:25:26.595  [2024-12-06T22:16:00.539Z] Copying: 8088/1048576 [kB] (8088 kBps) [2024-12-06T22:16:01.483Z] Copying: 24/1024 [MB] (16 MBps) [2024-12-06T22:16:02.424Z] Copying: 35/1024 [MB] (11 MBps) [2024-12-06T22:16:03.368Z] Copying: 53/1024 [MB] (17 MBps) [2024-12-06T22:16:04.313Z] Copying: 74/1024 [MB] (21 MBps) [2024-12-06T22:16:05.711Z] Copying: 88/1024 [MB] (14 MBps) [2024-12-06T22:16:06.676Z] Copying: 100/1024 [MB] (11 MBps) [2024-12-06T22:16:07.337Z] Copying: 110/1024 [MB] (10 MBps) [2024-12-06T22:16:08.723Z] Copying: 130/1024 [MB] (19 MBps) [2024-12-06T22:16:09.666Z] Copying: 140/1024 [MB] (10 MBps) [2024-12-06T22:16:10.609Z] Copying: 153/1024 [MB] (12 MBps) [2024-12-06T22:16:11.555Z] Copying: 180/1024 [MB] (27 MBps) [2024-12-06T22:16:12.501Z] Copying: 197/1024 [MB] (16 MBps) [2024-12-06T22:16:13.447Z] Copying: 210/1024 [MB] (13 MBps) [2024-12-06T22:16:14.391Z] Copying: 233/1024 [MB] (22 MBps) [2024-12-06T22:16:15.333Z] Copying: 252/1024 [MB] (19 MBps) [2024-12-06T22:16:16.717Z] Copying: 264/1024 [MB] (11 MBps) [2024-12-06T22:16:17.660Z] Copying: 285/1024 [MB] (21 MBps) [2024-12-06T22:16:18.603Z] Copying: 305/1024 [MB] (20 MBps) [2024-12-06T22:16:19.547Z] Copying: 326/1024 [MB] (20 MBps) [2024-12-06T22:16:20.493Z] Copying: 344/1024 [MB] (17 MBps) [2024-12-06T22:16:21.437Z] Copying: 360/1024 [MB] (16 MBps) [2024-12-06T22:16:22.379Z] Copying: 378/1024 [MB] (18 MBps) [2024-12-06T22:16:23.320Z] Copying: 399/1024 [MB] (20 MBps) [2024-12-06T22:16:24.702Z] Copying: 419/1024 [MB] (19 MBps) [2024-12-06T22:16:25.642Z] Copying: 438/1024 [MB] (19 MBps) [2024-12-06T22:16:26.584Z] Copying: 469/1024 [MB] (30 MBps) [2024-12-06T22:16:27.527Z] Copying: 485/1024 [MB] (16 MBps) [2024-12-06T22:16:28.468Z] Copying: 507/1024 [MB] (21 MBps) [2024-12-06T22:16:29.410Z] Copying: 524/1024 [MB] (16 MBps) [2024-12-06T22:16:30.353Z] Copying: 543/1024 [MB] (19 MBps) [2024-12-06T22:16:31.359Z] Copying: 568/1024 [MB] (25 MBps) [2024-12-06T22:16:32.746Z] Copying: 580/1024 [MB] (11 MBps) [2024-12-06T22:16:33.317Z] Copying: 593/1024 [MB] (12 MBps) [2024-12-06T22:16:34.707Z] Copying: 608/1024 [MB] (15 MBps) [2024-12-06T22:16:35.655Z] Copying: 625/1024 [MB] (16 MBps) [2024-12-06T22:16:36.599Z] Copying: 642/1024 [MB] (17 MBps) [2024-12-06T22:16:37.545Z] Copying: 663/1024 [MB] (20 MBps) [2024-12-06T22:16:38.492Z] Copying: 681/1024 [MB] (18 MBps) [2024-12-06T22:16:39.440Z] Copying: 694/1024 [MB] (12 MBps) [2024-12-06T22:16:40.387Z] Copying: 704/1024 [MB] (10 MBps) [2024-12-06T22:16:41.334Z] Copying: 715/1024 [MB] (10 MBps) [2024-12-06T22:16:42.727Z] Copying: 732/1024 [MB] (16 MBps) [2024-12-06T22:16:43.675Z] Copying: 746/1024 [MB] (14 MBps) [2024-12-06T22:16:44.622Z] Copying: 757/1024 [MB] (10 MBps) [2024-12-06T22:16:45.569Z] Copying: 771/1024 [MB] (13 MBps) [2024-12-06T22:16:46.516Z] Copying: 782/1024 [MB] (11 MBps) [2024-12-06T22:16:47.464Z] Copying: 793/1024 [MB] (11 MBps) [2024-12-06T22:16:48.412Z] Copying: 809/1024 [MB] (15 MBps) [2024-12-06T22:16:49.360Z] Copying: 824/1024 [MB] (15 MBps) [2024-12-06T22:16:50.308Z] Copying: 838/1024 [MB] (14 MBps) [2024-12-06T22:16:51.695Z] Copying: 855/1024 [MB] (16 MBps) [2024-12-06T22:16:52.641Z] Copying: 877/1024 [MB] (21 MBps) [2024-12-06T22:16:53.589Z] Copying: 898/1024 [MB] (21 MBps) [2024-12-06T22:16:54.541Z] Copying: 922/1024 [MB] (23 MBps) [2024-12-06T22:16:55.485Z] Copying: 940/1024 [MB] (18 MBps) [2024-12-06T22:16:56.428Z] Copying: 961/1024 [MB] (20 MBps) [2024-12-06T22:16:57.372Z] Copying: 976/1024 [MB] (15 MBps) [2024-12-06T22:16:58.319Z] Copying: 996/1024 [MB] (19 MBps) [2024-12-06T22:16:58.893Z] Copying: 1016/1024 [MB] (20 MBps) [2024-12-06T22:16:59.467Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-06 22:16:59.341310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.595 [2024-12-06 22:16:59.341396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:26.595 [2024-12-06 22:16:59.341429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:26.595 [2024-12-06 22:16:59.341438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:26.595 [2024-12-06 22:16:59.341463] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:26.595 [2024-12-06 22:16:59.344859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.595 [2024-12-06 22:16:59.344907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:26.595 [2024-12-06 22:16:59.344919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms
00:26:26.595 [2024-12-06 22:16:59.344929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:26.595 [2024-12-06 22:16:59.345185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.595 [2024-12-06 22:16:59.345198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:26.595 [2024-12-06 22:16:59.345214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms
00:26:26.595 [2024-12-06 22:16:59.345223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:26.595 [2024-12-06 22:16:59.351662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.595 [2024-12-06 22:16:59.351712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:26.595 [2024-12-06 22:16:59.351723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.421 ms
00:26:26.595 [2024-12-06 22:16:59.351732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:26.595 [2024-12-06 22:16:59.358440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.595 [2024-12-06 22:16:59.358489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:26:26.595 [2024-12-06 22:16:59.358501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.660 ms
00:26:26.595 [2024-12-06 22:16:59.358517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:26.595 [2024-12-06 22:16:59.385755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.595 [2024-12-06 22:16:59.385804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:26:26.595 [2024-12-06 22:16:59.385818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.163 ms
00:26:26.595 [2024-12-06 22:16:59.385827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:26.595 [2024-12-06 22:16:59.401707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.595 [2024-12-06 22:16:59.401756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:26:26.595 [2024-12-06 22:16:59.401769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.831 ms
00:26:26.595 [2024-12-06 22:16:59.401778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:26.857 [2024-12-06 22:16:59.666426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.857 [2024-12-06 22:16:59.666498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:26:26.857 [2024-12-06 22:16:59.666512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 264.594 ms
00:26:26.857 [2024-12-06 22:16:59.666522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:26.857 [2024-12-06 22:16:59.692880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.857 [2024-12-06 22:16:59.692927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:26:26.857 [2024-12-06 22:16:59.692939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.333 ms
00:26:26.857 [2024-12-06 22:16:59.692946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:26.857 [2024-12-06 22:16:59.718620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:26.857 [2024-12-06 22:16:59.718668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:26:26.857 [2024-12-06 22:16:59.718679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.627 ms
00:26:26.857 [2024-12-06 22:16:59.718687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:27.120 [2024-12-06 22:16:59.744034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.120 [2024-12-06 22:16:59.744080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:26:27.120 [2024-12-06 22:16:59.744092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.302 ms
00:26:27.120 [2024-12-06 22:16:59.744109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:27.120 [2024-12-06 22:16:59.769161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.120 [2024-12-06 22:16:59.769217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:26:27.120 [2024-12-06 22:16:59.769228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.928 ms
00:26:27.120 [2024-12-06 22:16:59.769236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:27.120 [2024-12-06 22:16:59.769282] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:27.120 [2024-12-06 22:16:59.769299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:26:27.120 [2024-12-06 22:16:59.769311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:26:27.120 [2024-12-06 22:16:59.769636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:26:27.121 [2024-12-06 22:16:59.769887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.769993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:27.121 [2024-12-06 22:16:59.770102] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:27.121 [2024-12-06 22:16:59.770110] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f39f8ea0-7cbe-473f-8a3b-f17ae415665c 00:26:27.121 [2024-12-06 22:16:59.770118] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:27.121 [2024-12-06 22:16:59.770127] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30912 00:26:27.121 [2024-12-06 22:16:59.770135] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29952 00:26:27.121 [2024-12-06 22:16:59.770150] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0321 00:26:27.121 [2024-12-06 22:16:59.770158] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:27.121 [2024-12-06 22:16:59.770186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:27.121 [2024-12-06 22:16:59.770196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:27.121 [2024-12-06 22:16:59.770203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:27.121 [2024-12-06 22:16:59.770210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:27.121 [2024-12-06 22:16:59.770218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.121 [2024-12-06 22:16:59.770227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:27.121 [2024-12-06 22:16:59.770236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:26:27.121 [2024-12-06 22:16:59.770244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.121 [2024-12-06 22:16:59.783879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.121 [2024-12-06 22:16:59.783928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:27.121 [2024-12-06 22:16:59.783940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.593 ms 00:26:27.121 [2024-12-06 22:16:59.783948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.121 [2024-12-06 22:16:59.784380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.121 [2024-12-06 22:16:59.784400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:27.121 [2024-12-06 22:16:59.784410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:26:27.121 [2024-12-06 22:16:59.784426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.821050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.821101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:27.122 [2024-12-06 22:16:59.821114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.821124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.821212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.821223] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:27.122 [2024-12-06 22:16:59.821233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.821243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.821318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.821335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:27.122 [2024-12-06 22:16:59.821345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.821354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.821372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.821381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:27.122 [2024-12-06 22:16:59.821391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.821400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.907014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.907073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:27.122 [2024-12-06 22:16:59.907087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.907095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.976518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.976574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:27.122 [2024-12-06 22:16:59.976586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.976594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.976674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.976684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:27.122 [2024-12-06 22:16:59.976700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.976708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.976750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.976760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:27.122 [2024-12-06 22:16:59.976769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.976777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.976885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.976898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:27.122 [2024-12-06 22:16:59.976907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.976918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.976954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.976963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:27.122 [2024-12-06 22:16:59.976972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.976981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.977023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.977032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:27.122 [2024-12-06 22:16:59.977041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.977052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.977098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:27.122 [2024-12-06 22:16:59.977108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:27.122 [2024-12-06 22:16:59.977116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:27.122 [2024-12-06 22:16:59.977124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.122 [2024-12-06 22:16:59.977281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 635.934 ms, result 0 00:26:28.064 00:26:28.064 00:26:28.064 22:17:00 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:30.612 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:30.612 22:17:02 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:30.612 22:17:02 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:30.612 22:17:02 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77355 00:26:30.612 22:17:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77355 ']' 00:26:30.612 22:17:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77355 00:26:30.612 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77355) - No such process 00:26:30.612 Process with pid 77355 is not found 00:26:30.612 Remove shared memory files 00:26:30.612 22:17:03 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77355 is not found' 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:30.612 22:17:03 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:30.612 00:26:30.612 real 4m31.661s 00:26:30.612 user 4m19.059s 00:26:30.612 sys 0m12.069s 00:26:30.612 ************************************ 00:26:30.612 END TEST ftl_restore 00:26:30.612 
************************************ 00:26:30.612 22:17:03 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.612 22:17:03 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:30.612 22:17:03 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:30.612 22:17:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:30.612 22:17:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.612 22:17:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:30.612 ************************************ 00:26:30.612 START TEST ftl_dirty_shutdown 00:26:30.612 ************************************ 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:30.612 * Looking for test storage... 00:26:30.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:30.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.612 --rc genhtml_branch_coverage=1 00:26:30.612 --rc genhtml_function_coverage=1 00:26:30.612 --rc genhtml_legend=1 00:26:30.612 --rc geninfo_all_blocks=1 00:26:30.612 --rc geninfo_unexecuted_blocks=1 00:26:30.612 00:26:30.612 ' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:30.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.612 --rc genhtml_branch_coverage=1 00:26:30.612 --rc genhtml_function_coverage=1 00:26:30.612 --rc genhtml_legend=1 00:26:30.612 --rc geninfo_all_blocks=1 00:26:30.612 --rc geninfo_unexecuted_blocks=1 00:26:30.612 00:26:30.612 ' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:30.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.612 --rc genhtml_branch_coverage=1 00:26:30.612 --rc genhtml_function_coverage=1 00:26:30.612 --rc genhtml_legend=1 00:26:30.612 --rc geninfo_all_blocks=1 00:26:30.612 --rc geninfo_unexecuted_blocks=1 00:26:30.612 00:26:30.612 ' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:30.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.612 --rc genhtml_branch_coverage=1 00:26:30.612 --rc genhtml_function_coverage=1 00:26:30.612 --rc genhtml_legend=1 00:26:30.612 --rc geninfo_all_blocks=1 00:26:30.612 --rc geninfo_unexecuted_blocks=1 00:26:30.612 00:26:30.612 ' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:30.612 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:30.613 22:17:03 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80235 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80235 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80235 ']' 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:30.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:30.613 22:17:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:30.613 [2024-12-06 22:17:03.351776] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
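For orientation: the prologue above, together with the RPC calls traced over the next few seconds of this log, reduces to the short sequence below. This is a minimal sketch rather than the test script itself; it assumes a built SPDK tree at SPDK_DIR, reuses the PCI addresses, sizes, and l2p_dram_limit from this run, and stands in for the harness's waitforlisten helper with a plain RPC poll. The lvstore and lvol UUIDs are produced at run time and will differ on replay.

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    # Start the target on core 0 (spdk_tgt -m 0x1, as above) and wait for
    # its RPC socket to answer; this approximates waitforlisten.
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
    svcpid=$!
    until "$RPC" spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

    # Base device (0000:00:11.0): NVMe controller -> lvstore -> thin
    # provisioned 103424 MiB volume.
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$("$RPC" bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol=$("$RPC" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

    # NV cache device (0000:00:10.0): attach and carve out a 5171 MiB slice.
    "$RPC" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    "$RPC" bdev_split_create nvc0n1 -s 5171 1

    # Create the FTL bdev with a 10 MiB L2P DRAM limit, write buffer
    # cache on nvc0n1p0 (240 s RPC timeout, as in the log).
    "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0

As the test name suggests, the "dirty" part comes later: the target is killed without a clean FTL shutdown and the device is expected to recover its state on the next startup. The EAL parameter dump that follows is the target acting on the configuration above.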
00:26:30.613 [2024-12-06 22:17:03.351884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80235 ] 00:26:30.874 [2024-12-06 22:17:03.510977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.874 [2024-12-06 22:17:03.606485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.446 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.446 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:31.446 22:17:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:31.446 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:31.447 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:31.447 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:31.447 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:31.447 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:31.707 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:31.707 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:31.707 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:31.707 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:31.707 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:31.707 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:31.707 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:31.707 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:31.969 { 00:26:31.969 "name": "nvme0n1", 00:26:31.969 "aliases": [ 00:26:31.969 "95da074d-13da-47e8-b236-be0e3ca17164" 00:26:31.969 ], 00:26:31.969 "product_name": "NVMe disk", 00:26:31.969 "block_size": 4096, 00:26:31.969 "num_blocks": 1310720, 00:26:31.969 "uuid": "95da074d-13da-47e8-b236-be0e3ca17164", 00:26:31.969 "numa_id": -1, 00:26:31.969 "assigned_rate_limits": { 00:26:31.969 "rw_ios_per_sec": 0, 00:26:31.969 "rw_mbytes_per_sec": 0, 00:26:31.969 "r_mbytes_per_sec": 0, 00:26:31.969 "w_mbytes_per_sec": 0 00:26:31.969 }, 00:26:31.969 "claimed": true, 00:26:31.969 "claim_type": "read_many_write_one", 00:26:31.969 "zoned": false, 00:26:31.969 "supported_io_types": { 00:26:31.969 "read": true, 00:26:31.969 "write": true, 00:26:31.969 "unmap": true, 00:26:31.969 "flush": true, 00:26:31.969 "reset": true, 00:26:31.969 "nvme_admin": true, 00:26:31.969 "nvme_io": true, 00:26:31.969 "nvme_io_md": false, 00:26:31.969 "write_zeroes": true, 00:26:31.969 "zcopy": false, 00:26:31.969 "get_zone_info": false, 00:26:31.969 "zone_management": false, 00:26:31.969 "zone_append": false, 00:26:31.969 "compare": true, 00:26:31.969 "compare_and_write": false, 00:26:31.969 "abort": true, 00:26:31.969 "seek_hole": false, 00:26:31.969 "seek_data": false, 00:26:31.969 
"copy": true, 00:26:31.969 "nvme_iov_md": false 00:26:31.969 }, 00:26:31.969 "driver_specific": { 00:26:31.969 "nvme": [ 00:26:31.969 { 00:26:31.969 "pci_address": "0000:00:11.0", 00:26:31.969 "trid": { 00:26:31.969 "trtype": "PCIe", 00:26:31.969 "traddr": "0000:00:11.0" 00:26:31.969 }, 00:26:31.969 "ctrlr_data": { 00:26:31.969 "cntlid": 0, 00:26:31.969 "vendor_id": "0x1b36", 00:26:31.969 "model_number": "QEMU NVMe Ctrl", 00:26:31.969 "serial_number": "12341", 00:26:31.969 "firmware_revision": "8.0.0", 00:26:31.969 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:31.969 "oacs": { 00:26:31.969 "security": 0, 00:26:31.969 "format": 1, 00:26:31.969 "firmware": 0, 00:26:31.969 "ns_manage": 1 00:26:31.969 }, 00:26:31.969 "multi_ctrlr": false, 00:26:31.969 "ana_reporting": false 00:26:31.969 }, 00:26:31.969 "vs": { 00:26:31.969 "nvme_version": "1.4" 00:26:31.969 }, 00:26:31.969 "ns_data": { 00:26:31.969 "id": 1, 00:26:31.969 "can_share": false 00:26:31.969 } 00:26:31.969 } 00:26:31.969 ], 00:26:31.969 "mp_policy": "active_passive" 00:26:31.969 } 00:26:31.969 } 00:26:31.969 ]' 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:31.969 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:32.231 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b3ee7475-0bad-414a-8cf4-b183f8643b8d 00:26:32.231 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:32.231 22:17:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3ee7475-0bad-414a-8cf4-b183f8643b8d 00:26:32.231 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:32.492 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=afa271ad-9684-4714-bedc-3d5684137a4e 00:26:32.492 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u afa271ad-9684-4714-bedc-3d5684137a4e 00:26:32.753 22:17:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:32.754 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:33.015 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:33.015 { 00:26:33.015 "name": "16cefefe-0941-4ff0-8e94-88dfea5aebd3", 00:26:33.015 "aliases": [ 00:26:33.015 "lvs/nvme0n1p0" 00:26:33.015 ], 00:26:33.015 "product_name": "Logical Volume", 00:26:33.015 "block_size": 4096, 00:26:33.015 "num_blocks": 26476544, 00:26:33.015 "uuid": "16cefefe-0941-4ff0-8e94-88dfea5aebd3", 00:26:33.015 "assigned_rate_limits": { 00:26:33.015 "rw_ios_per_sec": 0, 00:26:33.015 "rw_mbytes_per_sec": 0, 00:26:33.015 "r_mbytes_per_sec": 0, 00:26:33.015 "w_mbytes_per_sec": 0 00:26:33.015 }, 00:26:33.015 "claimed": false, 00:26:33.015 "zoned": false, 00:26:33.015 "supported_io_types": { 00:26:33.015 "read": true, 00:26:33.015 "write": true, 00:26:33.015 "unmap": true, 00:26:33.015 "flush": false, 00:26:33.015 "reset": true, 00:26:33.015 "nvme_admin": false, 00:26:33.015 "nvme_io": false, 00:26:33.015 "nvme_io_md": false, 00:26:33.015 "write_zeroes": true, 00:26:33.015 "zcopy": false, 00:26:33.015 "get_zone_info": false, 00:26:33.015 "zone_management": false, 00:26:33.015 "zone_append": false, 00:26:33.015 "compare": false, 00:26:33.015 "compare_and_write": false, 00:26:33.015 "abort": false, 00:26:33.015 "seek_hole": true, 00:26:33.015 "seek_data": true, 00:26:33.015 "copy": false, 00:26:33.015 "nvme_iov_md": false 00:26:33.015 }, 00:26:33.015 "driver_specific": { 00:26:33.015 "lvol": { 00:26:33.015 "lvol_store_uuid": "afa271ad-9684-4714-bedc-3d5684137a4e", 00:26:33.015 "base_bdev": "nvme0n1", 00:26:33.015 "thin_provision": true, 00:26:33.015 "num_allocated_clusters": 0, 00:26:33.015 "snapshot": false, 00:26:33.015 "clone": false, 00:26:33.015 "esnap_clone": false 00:26:33.015 } 00:26:33.015 } 00:26:33.015 } 00:26:33.015 ]' 00:26:33.015 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:33.015 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:33.015 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:33.015 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:33.015 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:33.015 22:17:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:33.016 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:33.016 22:17:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:33.016 22:17:05 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:33.277 22:17:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:33.277 22:17:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:33.277 22:17:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:33.277 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:33.277 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:33.277 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:33.277 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:33.277 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:33.538 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:33.538 { 00:26:33.538 "name": "16cefefe-0941-4ff0-8e94-88dfea5aebd3", 00:26:33.538 "aliases": [ 00:26:33.538 "lvs/nvme0n1p0" 00:26:33.538 ], 00:26:33.538 "product_name": "Logical Volume", 00:26:33.538 "block_size": 4096, 00:26:33.538 "num_blocks": 26476544, 00:26:33.538 "uuid": "16cefefe-0941-4ff0-8e94-88dfea5aebd3", 00:26:33.538 "assigned_rate_limits": { 00:26:33.538 "rw_ios_per_sec": 0, 00:26:33.538 "rw_mbytes_per_sec": 0, 00:26:33.538 "r_mbytes_per_sec": 0, 00:26:33.538 "w_mbytes_per_sec": 0 00:26:33.538 }, 00:26:33.538 "claimed": false, 00:26:33.538 "zoned": false, 00:26:33.538 "supported_io_types": { 00:26:33.538 "read": true, 00:26:33.538 "write": true, 00:26:33.538 "unmap": true, 00:26:33.538 "flush": false, 00:26:33.538 "reset": true, 00:26:33.538 "nvme_admin": false, 00:26:33.538 "nvme_io": false, 00:26:33.538 "nvme_io_md": false, 00:26:33.538 "write_zeroes": true, 00:26:33.538 "zcopy": false, 00:26:33.538 "get_zone_info": false, 00:26:33.538 "zone_management": false, 00:26:33.538 "zone_append": false, 00:26:33.538 "compare": false, 00:26:33.538 "compare_and_write": false, 00:26:33.538 "abort": false, 00:26:33.538 "seek_hole": true, 00:26:33.538 "seek_data": true, 00:26:33.538 "copy": false, 00:26:33.538 "nvme_iov_md": false 00:26:33.538 }, 00:26:33.538 "driver_specific": { 00:26:33.538 "lvol": { 00:26:33.538 "lvol_store_uuid": "afa271ad-9684-4714-bedc-3d5684137a4e", 00:26:33.538 "base_bdev": "nvme0n1", 00:26:33.538 "thin_provision": true, 00:26:33.538 "num_allocated_clusters": 0, 00:26:33.538 "snapshot": false, 00:26:33.538 "clone": false, 00:26:33.538 "esnap_clone": false 00:26:33.538 } 00:26:33.538 } 00:26:33.538 } 00:26:33.538 ]' 00:26:33.538 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:33.538 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:33.539 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:33.539 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:33.539 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:33.539 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:33.539 22:17:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:33.539 22:17:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:33.800 22:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:33.800 22:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:33.800 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:33.800 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:33.800 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:33.800 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:33.800 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 16cefefe-0941-4ff0-8e94-88dfea5aebd3 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:34.139 { 00:26:34.139 "name": "16cefefe-0941-4ff0-8e94-88dfea5aebd3", 00:26:34.139 "aliases": [ 00:26:34.139 "lvs/nvme0n1p0" 00:26:34.139 ], 00:26:34.139 "product_name": "Logical Volume", 00:26:34.139 "block_size": 4096, 00:26:34.139 "num_blocks": 26476544, 00:26:34.139 "uuid": "16cefefe-0941-4ff0-8e94-88dfea5aebd3", 00:26:34.139 "assigned_rate_limits": { 00:26:34.139 "rw_ios_per_sec": 0, 00:26:34.139 "rw_mbytes_per_sec": 0, 00:26:34.139 "r_mbytes_per_sec": 0, 00:26:34.139 "w_mbytes_per_sec": 0 00:26:34.139 }, 00:26:34.139 "claimed": false, 00:26:34.139 "zoned": false, 00:26:34.139 "supported_io_types": { 00:26:34.139 "read": true, 00:26:34.139 "write": true, 00:26:34.139 "unmap": true, 00:26:34.139 "flush": false, 00:26:34.139 "reset": true, 00:26:34.139 "nvme_admin": false, 00:26:34.139 "nvme_io": false, 00:26:34.139 "nvme_io_md": false, 00:26:34.139 "write_zeroes": true, 00:26:34.139 "zcopy": false, 00:26:34.139 "get_zone_info": false, 00:26:34.139 "zone_management": false, 00:26:34.139 "zone_append": false, 00:26:34.139 "compare": false, 00:26:34.139 "compare_and_write": false, 00:26:34.139 "abort": false, 00:26:34.139 "seek_hole": true, 00:26:34.139 "seek_data": true, 00:26:34.139 "copy": false, 00:26:34.139 "nvme_iov_md": false 00:26:34.139 }, 00:26:34.139 "driver_specific": { 00:26:34.139 "lvol": { 00:26:34.139 "lvol_store_uuid": "afa271ad-9684-4714-bedc-3d5684137a4e", 00:26:34.139 "base_bdev": "nvme0n1", 00:26:34.139 "thin_provision": true, 00:26:34.139 "num_allocated_clusters": 0, 00:26:34.139 "snapshot": false, 00:26:34.139 "clone": false, 00:26:34.139 "esnap_clone": false 00:26:34.139 } 00:26:34.139 } 00:26:34.139 } 00:26:34.139 ]' 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 16cefefe-0941-4ff0-8e94-88dfea5aebd3 
--l2p_dram_limit 10' 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:34.139 22:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 16cefefe-0941-4ff0-8e94-88dfea5aebd3 --l2p_dram_limit 10 -c nvc0n1p0 00:26:34.139 [2024-12-06 22:17:06.956287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.956410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:34.139 [2024-12-06 22:17:06.956430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:34.139 [2024-12-06 22:17:06.956437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.956490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.956498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:34.139 [2024-12-06 22:17:06.956506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:34.139 [2024-12-06 22:17:06.956512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.956533] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:34.139 [2024-12-06 22:17:06.957087] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:34.139 [2024-12-06 22:17:06.957107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.957113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:34.139 [2024-12-06 22:17:06.957121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:26:34.139 [2024-12-06 22:17:06.957127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.957435] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7c260cfd-9d08-4b71-afca-35499d570822 00:26:34.139 [2024-12-06 22:17:06.958429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.958460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:34.139 [2024-12-06 22:17:06.958469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:34.139 [2024-12-06 22:17:06.958477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.963348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.963455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:34.139 [2024-12-06 22:17:06.963468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.836 ms 00:26:34.139 [2024-12-06 22:17:06.963475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.963542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.963551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:34.139 [2024-12-06 22:17:06.963557] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:34.139 [2024-12-06 22:17:06.963567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.963611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.963621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:34.139 [2024-12-06 22:17:06.963629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:34.139 [2024-12-06 22:17:06.963635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.963653] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:34.139 [2024-12-06 22:17:06.966520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.966606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:34.139 [2024-12-06 22:17:06.966621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:26:34.139 [2024-12-06 22:17:06.966627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.966657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.966663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:34.139 [2024-12-06 22:17:06.966671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:34.139 [2024-12-06 22:17:06.966677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.966690] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:34.139 [2024-12-06 22:17:06.966799] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:34.139 [2024-12-06 22:17:06.966811] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:34.139 [2024-12-06 22:17:06.966819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:34.139 [2024-12-06 22:17:06.966828] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:34.139 [2024-12-06 22:17:06.966835] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:34.139 [2024-12-06 22:17:06.966843] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:34.139 [2024-12-06 22:17:06.966848] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:34.139 [2024-12-06 22:17:06.966857] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:34.139 [2024-12-06 22:17:06.966863] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:34.139 [2024-12-06 22:17:06.966870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.966880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:34.139 [2024-12-06 22:17:06.966887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:26:34.139 [2024-12-06 22:17:06.966892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.966958] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.139 [2024-12-06 22:17:06.966964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:34.139 [2024-12-06 22:17:06.966971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:34.139 [2024-12-06 22:17:06.966977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.139 [2024-12-06 22:17:06.967055] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:34.139 [2024-12-06 22:17:06.967062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:34.139 [2024-12-06 22:17:06.967070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:34.139 [2024-12-06 22:17:06.967076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.139 [2024-12-06 22:17:06.967083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:34.139 [2024-12-06 22:17:06.967088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:34.139 [2024-12-06 22:17:06.967095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:34.139 [2024-12-06 22:17:06.967100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:34.139 [2024-12-06 22:17:06.967108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:34.139 [2024-12-06 22:17:06.967112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:34.139 [2024-12-06 22:17:06.967118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:34.139 [2024-12-06 22:17:06.967123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:34.139 [2024-12-06 22:17:06.967130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:34.139 [2024-12-06 22:17:06.967135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:34.139 [2024-12-06 22:17:06.967142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:34.139 [2024-12-06 22:17:06.967147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.139 [2024-12-06 22:17:06.967155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:34.139 [2024-12-06 22:17:06.967160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:34.139 [2024-12-06 22:17:06.967166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.139 [2024-12-06 22:17:06.967185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:34.140 [2024-12-06 22:17:06.967192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:34.140 [2024-12-06 22:17:06.967197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:34.140 [2024-12-06 22:17:06.967204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:34.140 [2024-12-06 22:17:06.967209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:34.140 [2024-12-06 22:17:06.967216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:34.140 [2024-12-06 22:17:06.967221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:34.140 [2024-12-06 22:17:06.967228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:34.140 [2024-12-06 22:17:06.967233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:34.140 [2024-12-06 22:17:06.967239] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:34.140 [2024-12-06 22:17:06.967245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:34.140 [2024-12-06 22:17:06.967251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:34.140 [2024-12-06 22:17:06.967256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:34.140 [2024-12-06 22:17:06.967264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:34.140 [2024-12-06 22:17:06.967270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:34.140 [2024-12-06 22:17:06.967278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:34.140 [2024-12-06 22:17:06.967282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:34.140 [2024-12-06 22:17:06.967289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:34.140 [2024-12-06 22:17:06.967294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:34.140 [2024-12-06 22:17:06.967300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:34.140 [2024-12-06 22:17:06.967305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.140 [2024-12-06 22:17:06.967311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:34.140 [2024-12-06 22:17:06.967316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:34.140 [2024-12-06 22:17:06.967322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.140 [2024-12-06 22:17:06.967327] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:34.140 [2024-12-06 22:17:06.967334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:34.140 [2024-12-06 22:17:06.967340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:34.140 [2024-12-06 22:17:06.967347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:34.140 [2024-12-06 22:17:06.967352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:34.140 [2024-12-06 22:17:06.967360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:34.140 [2024-12-06 22:17:06.967366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:34.140 [2024-12-06 22:17:06.967372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:34.140 [2024-12-06 22:17:06.967376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:34.140 [2024-12-06 22:17:06.967382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:34.140 [2024-12-06 22:17:06.967388] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:34.140 [2024-12-06 22:17:06.967398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:34.140 [2024-12-06 22:17:06.967404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:34.140 [2024-12-06 22:17:06.967415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:34.140 [2024-12-06 22:17:06.967420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:34.140 [2024-12-06 22:17:06.967427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:34.140 [2024-12-06 22:17:06.967432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:34.140 [2024-12-06 22:17:06.967440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:34.140 [2024-12-06 22:17:06.967445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:34.140 [2024-12-06 22:17:06.967451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:34.140 [2024-12-06 22:17:06.967457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:34.140 [2024-12-06 22:17:06.967465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:34.140 [2024-12-06 22:17:06.967470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:34.140 [2024-12-06 22:17:06.967476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:34.140 [2024-12-06 22:17:06.967482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:34.140 [2024-12-06 22:17:06.967489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:34.140 [2024-12-06 22:17:06.967494] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:34.140 [2024-12-06 22:17:06.967501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:34.140 [2024-12-06 22:17:06.967507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:34.140 [2024-12-06 22:17:06.967514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:34.140 [2024-12-06 22:17:06.967519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:34.140 [2024-12-06 22:17:06.967526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:34.140 [2024-12-06 22:17:06.967532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.140 [2024-12-06 22:17:06.967539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:34.140 [2024-12-06 22:17:06.967545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:26:34.140 [2024-12-06 22:17:06.967551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.140 [2024-12-06 22:17:06.967595] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:34.140 [2024-12-06 22:17:06.967607] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:37.464 [2024-12-06 22:17:09.887599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.464 [2024-12-06 22:17:09.887661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:37.464 [2024-12-06 22:17:09.887676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2919.991 ms 00:26:37.464 [2024-12-06 22:17:09.887686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.464 [2024-12-06 22:17:09.914520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.464 [2024-12-06 22:17:09.914740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:37.464 [2024-12-06 22:17:09.914758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.623 ms 00:26:37.464 [2024-12-06 22:17:09.914768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.464 [2024-12-06 22:17:09.914879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.464 [2024-12-06 22:17:09.914891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:37.464 [2024-12-06 22:17:09.914900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:37.465 [2024-12-06 22:17:09.914916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:09.946216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:09.946252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:37.465 [2024-12-06 22:17:09.946263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.248 ms 00:26:37.465 [2024-12-06 22:17:09.946275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:09.946303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:09.946315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:37.465 [2024-12-06 22:17:09.946324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:37.465 [2024-12-06 22:17:09.946340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:09.946728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:09.946753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:37.465 [2024-12-06 22:17:09.946762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:26:37.465 [2024-12-06 22:17:09.946773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:09.946876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:09.946887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:37.465 [2024-12-06 22:17:09.946897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:26:37.465 [2024-12-06 22:17:09.946908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:09.961603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:09.961744] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.465 [2024-12-06 22:17:09.961761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.678 ms 00:26:37.465 [2024-12-06 22:17:09.961771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:09.987911] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:37.465 [2024-12-06 22:17:09.991448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:09.991489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:37.465 [2024-12-06 22:17:09.991508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.600 ms 00:26:37.465 [2024-12-06 22:17:09.991520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.072041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.072095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:37.465 [2024-12-06 22:17:10.072119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.471 ms 00:26:37.465 [2024-12-06 22:17:10.072128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.072333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.072350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:37.465 [2024-12-06 22:17:10.072363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:26:37.465 [2024-12-06 22:17:10.072372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.096777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.096818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:37.465 [2024-12-06 22:17:10.096833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.354 ms 00:26:37.465 [2024-12-06 22:17:10.096841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.120198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.120233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:37.465 [2024-12-06 22:17:10.120248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.309 ms 00:26:37.465 [2024-12-06 22:17:10.120256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.120823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.120841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:37.465 [2024-12-06 22:17:10.120852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:26:37.465 [2024-12-06 22:17:10.120861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.197383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.197595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:37.465 [2024-12-06 22:17:10.197624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.467 ms 00:26:37.465 [2024-12-06 22:17:10.197633] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.224400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.224596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:37.465 [2024-12-06 22:17:10.224624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.419 ms 00:26:37.465 [2024-12-06 22:17:10.224633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.250415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.250465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:37.465 [2024-12-06 22:17:10.250481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.671 ms 00:26:37.465 [2024-12-06 22:17:10.250490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.276770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.276818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:37.465 [2024-12-06 22:17:10.276833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.228 ms 00:26:37.465 [2024-12-06 22:17:10.276842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.276897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.276906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:37.465 [2024-12-06 22:17:10.276921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:37.465 [2024-12-06 22:17:10.276930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.277023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.465 [2024-12-06 22:17:10.277038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:37.465 [2024-12-06 22:17:10.277049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:37.465 [2024-12-06 22:17:10.277056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.465 [2024-12-06 22:17:10.278273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3321.466 ms, result 0 00:26:37.465 { 00:26:37.465 "name": "ftl0", 00:26:37.465 "uuid": "7c260cfd-9d08-4b71-afca-35499d570822" 00:26:37.465 } 00:26:37.465 22:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:37.465 22:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:37.728 22:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:37.728 22:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:37.728 22:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:37.990 /dev/nbd0 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:37.990 1+0 records in 00:26:37.990 1+0 records out 00:26:37.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650449 s, 6.3 MB/s 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:37.990 22:17:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:37.990 [2024-12-06 22:17:10.850558] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:26:37.990 [2024-12-06 22:17:10.850695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80372 ] 00:26:38.253 [2024-12-06 22:17:11.015319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.515 [2024-12-06 22:17:11.139511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.905  [2024-12-06T22:17:13.722Z] Copying: 188/1024 [MB] (188 MBps) [2024-12-06T22:17:14.666Z] Copying: 378/1024 [MB] (189 MBps) [2024-12-06T22:17:15.628Z] Copying: 635/1024 [MB] (256 MBps) [2024-12-06T22:17:16.198Z] Copying: 894/1024 [MB] (259 MBps) [2024-12-06T22:17:16.770Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:26:43.898 00:26:43.898 22:17:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:45.811 22:17:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:45.811 [2024-12-06 22:17:18.667632] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
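In shell terms, the create-and-prime phase traced above reduces to the short sequence below. This is a minimal sketch rather than the test script itself: $rpc is assumed to point at /home/vagrant/spdk_repo/spdk/scripts/rpc.py, $dd at /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd, and the testfile path is abbreviated; every flag is the one visible in the trace.

    # create the FTL bdev on the base device, with nvc0n1p0 as the write-buffer cache
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 16cefefe-0941-4ff0-8e94-88dfea5aebd3 --l2p_dram_limit 10 -c nvc0n1p0
    # expose ftl0 as a kernel block device over NBD
    modprobe nbd
    $rpc nbd_start_disk ftl0 /dev/nbd0
    # stage 1 GiB (262144 x 4 KiB blocks) of random reference data, checksum it,
    # then replay it onto the device with O_DIRECT
    $dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile
    $dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct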
00:26:45.811 [2024-12-06 22:17:18.667719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80458 ] 00:26:46.072 [2024-12-06 22:17:18.822212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.072 [2024-12-06 22:17:18.916290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.457  [2024-12-06T22:17:21.272Z] Copying: 33/1024 [MB] (33 MBps) [2024-12-06T22:17:22.214Z] Copying: 63/1024 [MB] (30 MBps) [2024-12-06T22:17:23.158Z] Copying: 86/1024 [MB] (22 MBps) [2024-12-06T22:17:24.545Z] Copying: 105/1024 [MB] (19 MBps) [2024-12-06T22:17:25.483Z] Copying: 124/1024 [MB] (18 MBps) [2024-12-06T22:17:26.423Z] Copying: 147/1024 [MB] (22 MBps) [2024-12-06T22:17:27.414Z] Copying: 180/1024 [MB] (33 MBps) [2024-12-06T22:17:28.358Z] Copying: 216/1024 [MB] (35 MBps) [2024-12-06T22:17:29.303Z] Copying: 244/1024 [MB] (28 MBps) [2024-12-06T22:17:30.239Z] Copying: 274/1024 [MB] (29 MBps) [2024-12-06T22:17:31.190Z] Copying: 308/1024 [MB] (34 MBps) [2024-12-06T22:17:32.133Z] Copying: 339/1024 [MB] (30 MBps) [2024-12-06T22:17:33.518Z] Copying: 371/1024 [MB] (32 MBps) [2024-12-06T22:17:34.460Z] Copying: 399/1024 [MB] (27 MBps) [2024-12-06T22:17:35.402Z] Copying: 425/1024 [MB] (26 MBps) [2024-12-06T22:17:36.343Z] Copying: 460/1024 [MB] (35 MBps) [2024-12-06T22:17:37.278Z] Copying: 495/1024 [MB] (34 MBps) [2024-12-06T22:17:38.220Z] Copying: 520/1024 [MB] (25 MBps) [2024-12-06T22:17:39.161Z] Copying: 545/1024 [MB] (24 MBps) [2024-12-06T22:17:40.546Z] Copying: 569/1024 [MB] (24 MBps) [2024-12-06T22:17:41.166Z] Copying: 603/1024 [MB] (33 MBps) [2024-12-06T22:17:42.552Z] Copying: 637/1024 [MB] (34 MBps) [2024-12-06T22:17:43.495Z] Copying: 672/1024 [MB] (35 MBps) [2024-12-06T22:17:44.438Z] Copying: 706/1024 [MB] (33 MBps) [2024-12-06T22:17:45.380Z] Copying: 728/1024 [MB] (22 MBps) [2024-12-06T22:17:46.323Z] Copying: 757/1024 [MB] (29 MBps) [2024-12-06T22:17:47.267Z] Copying: 786/1024 [MB] (28 MBps) [2024-12-06T22:17:48.209Z] Copying: 816/1024 [MB] (29 MBps) [2024-12-06T22:17:49.153Z] Copying: 844/1024 [MB] (27 MBps) [2024-12-06T22:17:50.541Z] Copying: 874/1024 [MB] (30 MBps) [2024-12-06T22:17:51.488Z] Copying: 908/1024 [MB] (33 MBps) [2024-12-06T22:17:52.431Z] Copying: 942/1024 [MB] (34 MBps) [2024-12-06T22:17:53.375Z] Copying: 973/1024 [MB] (31 MBps) [2024-12-06T22:17:53.974Z] Copying: 1002/1024 [MB] (28 MBps) [2024-12-06T22:17:54.545Z] Copying: 1024/1024 [MB] (average 29 MBps) 00:27:21.673 00:27:21.673 22:17:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:21.673 22:17:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:21.935 22:17:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:21.935 [2024-12-06 22:17:54.802190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.935 [2024-12-06 22:17:54.802361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:21.935 [2024-12-06 22:17:54.802381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:21.935 [2024-12-06 22:17:54.802392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.935 [2024-12-06 22:17:54.802424] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:22.196 [2024-12-06 22:17:54.805058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.196 [2024-12-06 22:17:54.805086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:22.196 [2024-12-06 22:17:54.805098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.615 ms 00:27:22.196 [2024-12-06 22:17:54.805106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.196 [2024-12-06 22:17:54.807899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.196 [2024-12-06 22:17:54.807931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:22.196 [2024-12-06 22:17:54.807942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:27:22.196 [2024-12-06 22:17:54.807950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.196 [2024-12-06 22:17:54.823896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.196 [2024-12-06 22:17:54.823928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:22.196 [2024-12-06 22:17:54.823940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.923 ms 00:27:22.196 [2024-12-06 22:17:54.823947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.196 [2024-12-06 22:17:54.830192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.196 [2024-12-06 22:17:54.830219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:22.196 [2024-12-06 22:17:54.830231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.211 ms 00:27:22.196 [2024-12-06 22:17:54.830240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.196 [2024-12-06 22:17:54.853407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.196 [2024-12-06 22:17:54.853438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:22.196 [2024-12-06 22:17:54.853451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.100 ms 00:27:22.196 [2024-12-06 22:17:54.853458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.196 [2024-12-06 22:17:54.868088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.196 [2024-12-06 22:17:54.868309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:22.196 [2024-12-06 22:17:54.868333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.593 ms 00:27:22.196 [2024-12-06 22:17:54.868341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.196 [2024-12-06 22:17:54.868488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.196 [2024-12-06 22:17:54.868499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:22.196 [2024-12-06 22:17:54.868509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:27:22.196 [2024-12-06 22:17:54.868516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.196 [2024-12-06 22:17:54.891546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.196 [2024-12-06 22:17:54.891667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:22.196 [2024-12-06 22:17:54.891684] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.012 ms 00:27:22.196 [2024-12-06 22:17:54.891692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.196 [2024-12-06 22:17:54.913793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.196 [2024-12-06 22:17:54.913823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:22.197 [2024-12-06 22:17:54.913837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.071 ms 00:27:22.197 [2024-12-06 22:17:54.913844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.197 [2024-12-06 22:17:54.935475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.197 [2024-12-06 22:17:54.935504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:22.197 [2024-12-06 22:17:54.935516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.596 ms 00:27:22.197 [2024-12-06 22:17:54.935523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.197 [2024-12-06 22:17:54.957806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.197 [2024-12-06 22:17:54.957912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:22.197 [2024-12-06 22:17:54.957929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.179 ms 00:27:22.197 [2024-12-06 22:17:54.957936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.197 [2024-12-06 22:17:54.957965] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:22.197 [2024-12-06 22:17:54.957979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.957990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.957998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958090] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 
22:17:54.958315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:27:22.197 [2024-12-06 22:17:54.958527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:22.197 [2024-12-06 22:17:54.958666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:22.198 [2024-12-06 22:17:54.958849] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:22.198 [2024-12-06 22:17:54.958858] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c260cfd-9d08-4b71-afca-35499d570822 00:27:22.198 [2024-12-06 22:17:54.958865] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:22.198 [2024-12-06 22:17:54.958875] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:22.198 [2024-12-06 22:17:54.958884] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:22.198 [2024-12-06 22:17:54.958892] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:22.198 [2024-12-06 22:17:54.958899] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:22.198 [2024-12-06 22:17:54.958908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:22.198 [2024-12-06 22:17:54.958915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:22.198 [2024-12-06 22:17:54.958922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:22.198 [2024-12-06 22:17:54.958929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:22.198 [2024-12-06 22:17:54.958937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.198 [2024-12-06 22:17:54.958944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:22.198 [2024-12-06 22:17:54.958954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:27:22.198 [2024-12-06 22:17:54.958961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.198 [2024-12-06 22:17:54.971300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:22.198 [2024-12-06 22:17:54.971331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:22.198 [2024-12-06 22:17:54.971343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.307 ms 00:27:22.198 [2024-12-06 22:17:54.971350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.198 [2024-12-06 22:17:54.971702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.198 [2024-12-06 22:17:54.971721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:22.198 [2024-12-06 22:17:54.971731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:27:22.198 [2024-12-06 22:17:54.971738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.198 [2024-12-06 22:17:55.013212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.198 [2024-12-06 22:17:55.013244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.198 [2024-12-06 22:17:55.013257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.198 [2024-12-06 22:17:55.013265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.198 [2024-12-06 22:17:55.013319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.198 [2024-12-06 22:17:55.013328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.198 [2024-12-06 22:17:55.013337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.198 [2024-12-06 22:17:55.013344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.198 [2024-12-06 22:17:55.013408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.198 [2024-12-06 22:17:55.013419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.198 [2024-12-06 22:17:55.013428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.198 [2024-12-06 22:17:55.013435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.198 [2024-12-06 22:17:55.013455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.198 [2024-12-06 22:17:55.013462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.198 [2024-12-06 22:17:55.013471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.198 [2024-12-06 22:17:55.013478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.458 [2024-12-06 22:17:55.090060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.458 [2024-12-06 22:17:55.090095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.458 [2024-12-06 22:17:55.090107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.458 [2024-12-06 22:17:55.090114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.458 [2024-12-06 22:17:55.153014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.458 [2024-12-06 22:17:55.153170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.458 [2024-12-06 22:17:55.153205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.458 [2024-12-06 22:17:55.153213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.458 [2024-12-06 
22:17:55.153312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.458 [2024-12-06 22:17:55.153322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.458 [2024-12-06 22:17:55.153334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.458 [2024-12-06 22:17:55.153341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.458 [2024-12-06 22:17:55.153388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.458 [2024-12-06 22:17:55.153397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.458 [2024-12-06 22:17:55.153406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.458 [2024-12-06 22:17:55.153414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.458 [2024-12-06 22:17:55.153503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.458 [2024-12-06 22:17:55.153513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.458 [2024-12-06 22:17:55.153522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.458 [2024-12-06 22:17:55.153531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.458 [2024-12-06 22:17:55.153563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.458 [2024-12-06 22:17:55.153572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:22.458 [2024-12-06 22:17:55.153581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.458 [2024-12-06 22:17:55.153588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.458 [2024-12-06 22:17:55.153623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.458 [2024-12-06 22:17:55.153631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.458 [2024-12-06 22:17:55.153640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.458 [2024-12-06 22:17:55.153648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.458 [2024-12-06 22:17:55.153691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.458 [2024-12-06 22:17:55.153700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.458 [2024-12-06 22:17:55.153709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.458 [2024-12-06 22:17:55.153716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.458 [2024-12-06 22:17:55.153840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.628 ms, result 0 00:27:22.458 true 00:27:22.458 22:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80235 00:27:22.458 22:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80235 00:27:22.458 22:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:22.458 [2024-12-06 22:17:55.251065] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
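The dirty-shutdown step itself then condenses to the sketch below, under the same $rpc/$dd assumptions and with paths abbreviated as before (80235 is the spdk_tgt pid of this particular run, and ftl.json is the bdev configuration captured earlier with save_subsystem_config):

    # quiesce and unload the FTL bdev cleanly first
    sync /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0
    $rpc bdev_ftl_unload -b ftl0
    # then kill the target outright and drop its trace file
    kill -9 80235
    rm -f /dev/shm/spdk_tgt_trace.pid80235
    # stage a second data set and write it straight into ftl0 from the saved
    # config, seeking past the first data set (this is the spdk_dd run that
    # continues in the trace below)
    $dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    $dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json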
00:27:22.458 [2024-12-06 22:17:55.251195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80839 ] 00:27:22.718 [2024-12-06 22:17:55.410634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.718 [2024-12-06 22:17:55.503127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.105  [2024-12-06T22:17:57.920Z] Copying: 188/1024 [MB] (188 MBps) [2024-12-06T22:17:58.858Z] Copying: 395/1024 [MB] (206 MBps) [2024-12-06T22:17:59.790Z] Copying: 652/1024 [MB] (257 MBps) [2024-12-06T22:18:00.355Z] Copying: 906/1024 [MB] (254 MBps) [2024-12-06T22:18:00.923Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:27:28.051 00:27:28.051 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80235 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:28.051 22:18:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:28.051 [2024-12-06 22:18:00.843562] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:27:28.051 [2024-12-06 22:18:00.843683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80898 ] 00:27:28.309 [2024-12-06 22:18:00.998710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.309 [2024-12-06 22:18:01.075504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.566 [2024-12-06 22:18:01.286194] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:28.566 [2024-12-06 22:18:01.286248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:28.566 [2024-12-06 22:18:01.348827] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:28.566 [2024-12-06 22:18:01.349098] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:28.566 [2024-12-06 22:18:01.349232] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:28.825 [2024-12-06 22:18:01.520309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.825 [2024-12-06 22:18:01.520346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:28.825 [2024-12-06 22:18:01.520356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:28.825 [2024-12-06 22:18:01.520365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.825 [2024-12-06 22:18:01.520400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.825 [2024-12-06 22:18:01.520408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.825 [2024-12-06 22:18:01.520415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:28.825 [2024-12-06 22:18:01.520421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.825 [2024-12-06 22:18:01.520433] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:28.825 
[2024-12-06 22:18:01.520933] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:28.826 [2024-12-06 22:18:01.520945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.520951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.826 [2024-12-06 22:18:01.520957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:27:28.826 [2024-12-06 22:18:01.520962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.826 [2024-12-06 22:18:01.521947] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:28.826 [2024-12-06 22:18:01.531445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.531480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:28.826 [2024-12-06 22:18:01.531489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.498 ms 00:27:28.826 [2024-12-06 22:18:01.531495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.826 [2024-12-06 22:18:01.531538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.531545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:28.826 [2024-12-06 22:18:01.531552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:28.826 [2024-12-06 22:18:01.531557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.826 [2024-12-06 22:18:01.535952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.535978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.826 [2024-12-06 22:18:01.535986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.353 ms 00:27:28.826 [2024-12-06 22:18:01.535991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.826 [2024-12-06 22:18:01.536046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.536053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.826 [2024-12-06 22:18:01.536059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:28.826 [2024-12-06 22:18:01.536065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.826 [2024-12-06 22:18:01.536097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.536104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:28.826 [2024-12-06 22:18:01.536110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:28.826 [2024-12-06 22:18:01.536132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.826 [2024-12-06 22:18:01.536146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:28.826 [2024-12-06 22:18:01.538721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.538744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.826 [2024-12-06 22:18:01.538751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:27:28.826 [2024-12-06 22:18:01.538757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:28.826 [2024-12-06 22:18:01.538784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.538792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:28.826 [2024-12-06 22:18:01.538798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:28.826 [2024-12-06 22:18:01.538804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.826 [2024-12-06 22:18:01.538819] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:28.826 [2024-12-06 22:18:01.538834] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:28.826 [2024-12-06 22:18:01.538860] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:28.826 [2024-12-06 22:18:01.538871] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:28.826 [2024-12-06 22:18:01.538951] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:28.826 [2024-12-06 22:18:01.538958] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:28.826 [2024-12-06 22:18:01.538967] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:28.826 [2024-12-06 22:18:01.538977] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:28.826 [2024-12-06 22:18:01.538983] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:28.826 [2024-12-06 22:18:01.538989] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:28.826 [2024-12-06 22:18:01.538995] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:28.826 [2024-12-06 22:18:01.539000] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:28.826 [2024-12-06 22:18:01.539006] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:28.826 [2024-12-06 22:18:01.539012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.539017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:28.826 [2024-12-06 22:18:01.539023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:27:28.826 [2024-12-06 22:18:01.539029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.826 [2024-12-06 22:18:01.539091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.826 [2024-12-06 22:18:01.539099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:28.826 [2024-12-06 22:18:01.539105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:28.826 [2024-12-06 22:18:01.539110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.826 [2024-12-06 22:18:01.539198] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:28.826 [2024-12-06 22:18:01.539207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:28.826 [2024-12-06 22:18:01.539213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.826 [2024-12-06 22:18:01.539219] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.826 [2024-12-06 22:18:01.539225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:28.826 [2024-12-06 22:18:01.539230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:28.826 [2024-12-06 22:18:01.539235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:28.826 [2024-12-06 22:18:01.539241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:28.826 [2024-12-06 22:18:01.539247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:28.826 [2024-12-06 22:18:01.539256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.826 [2024-12-06 22:18:01.539261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:28.826 [2024-12-06 22:18:01.539270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:28.826 [2024-12-06 22:18:01.539274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.826 [2024-12-06 22:18:01.539280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:28.826 [2024-12-06 22:18:01.539286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:28.826 [2024-12-06 22:18:01.539291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.826 [2024-12-06 22:18:01.539296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:28.826 [2024-12-06 22:18:01.539301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:28.826 [2024-12-06 22:18:01.539306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.826 [2024-12-06 22:18:01.539311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:28.826 [2024-12-06 22:18:01.539317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:28.827 [2024-12-06 22:18:01.539322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.827 [2024-12-06 22:18:01.539326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:28.827 [2024-12-06 22:18:01.539332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:28.827 [2024-12-06 22:18:01.539337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.827 [2024-12-06 22:18:01.539342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:28.827 [2024-12-06 22:18:01.539347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:28.827 [2024-12-06 22:18:01.539352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.827 [2024-12-06 22:18:01.539357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:28.827 [2024-12-06 22:18:01.539363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:28.827 [2024-12-06 22:18:01.539367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.827 [2024-12-06 22:18:01.539372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:28.827 [2024-12-06 22:18:01.539377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:28.827 [2024-12-06 22:18:01.539382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.827 [2024-12-06 22:18:01.539386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:28.827 
[2024-12-06 22:18:01.539391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:28.827 [2024-12-06 22:18:01.539396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.827 [2024-12-06 22:18:01.539400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:28.827 [2024-12-06 22:18:01.539405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:28.827 [2024-12-06 22:18:01.539410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.827 [2024-12-06 22:18:01.539415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:28.827 [2024-12-06 22:18:01.539420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:28.827 [2024-12-06 22:18:01.539425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.827 [2024-12-06 22:18:01.539429] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:28.827 [2024-12-06 22:18:01.539435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:28.827 [2024-12-06 22:18:01.539442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.827 [2024-12-06 22:18:01.539448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.827 [2024-12-06 22:18:01.539454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:28.827 [2024-12-06 22:18:01.539459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:28.827 [2024-12-06 22:18:01.539464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:28.827 [2024-12-06 22:18:01.539469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:28.827 [2024-12-06 22:18:01.539474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:28.827 [2024-12-06 22:18:01.539479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:28.827 [2024-12-06 22:18:01.539485] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:28.827 [2024-12-06 22:18:01.539492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.827 [2024-12-06 22:18:01.539498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:28.827 [2024-12-06 22:18:01.539503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:28.827 [2024-12-06 22:18:01.539509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:28.827 [2024-12-06 22:18:01.539514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:28.827 [2024-12-06 22:18:01.539519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:28.827 [2024-12-06 22:18:01.539525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:28.827 [2024-12-06 22:18:01.539530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:27:28.827 [2024-12-06 22:18:01.539535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:28.827 [2024-12-06 22:18:01.539540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:28.827 [2024-12-06 22:18:01.539545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:28.827 [2024-12-06 22:18:01.539551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:28.827 [2024-12-06 22:18:01.539556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:28.827 [2024-12-06 22:18:01.539561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:28.827 [2024-12-06 22:18:01.539566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:28.827 [2024-12-06 22:18:01.539571] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:28.827 [2024-12-06 22:18:01.539577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.827 [2024-12-06 22:18:01.539583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:28.827 [2024-12-06 22:18:01.539589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:28.827 [2024-12-06 22:18:01.539594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:28.827 [2024-12-06 22:18:01.539599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:28.827 [2024-12-06 22:18:01.539605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.827 [2024-12-06 22:18:01.539610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:28.827 [2024-12-06 22:18:01.539615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:27:28.827 [2024-12-06 22:18:01.539621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.827 [2024-12-06 22:18:01.560320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.827 [2024-12-06 22:18:01.560348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:28.827 [2024-12-06 22:18:01.560356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.666 ms 00:27:28.827 [2024-12-06 22:18:01.560363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.827 [2024-12-06 22:18:01.560431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.827 [2024-12-06 22:18:01.560438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:28.827 [2024-12-06 22:18:01.560444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:28.827 [2024-12-06 
22:18:01.560450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.827 [2024-12-06 22:18:01.598490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.827 [2024-12-06 22:18:01.598527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:28.827 [2024-12-06 22:18:01.598539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.996 ms 00:27:28.827 [2024-12-06 22:18:01.598546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.827 [2024-12-06 22:18:01.598584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.827 [2024-12-06 22:18:01.598591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:28.827 [2024-12-06 22:18:01.598598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:28.827 [2024-12-06 22:18:01.598604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.827 [2024-12-06 22:18:01.598921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.828 [2024-12-06 22:18:01.598940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:28.828 [2024-12-06 22:18:01.598948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:27:28.828 [2024-12-06 22:18:01.598956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.828 [2024-12-06 22:18:01.599055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.828 [2024-12-06 22:18:01.599062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:28.828 [2024-12-06 22:18:01.599069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:28.828 [2024-12-06 22:18:01.599074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.828 [2024-12-06 22:18:01.609530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.828 [2024-12-06 22:18:01.609641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:28.828 [2024-12-06 22:18:01.609653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.440 ms 00:27:28.828 [2024-12-06 22:18:01.609659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.828 [2024-12-06 22:18:01.619190] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:28.828 [2024-12-06 22:18:01.619219] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:28.828 [2024-12-06 22:18:01.619229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.828 [2024-12-06 22:18:01.619236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:28.828 [2024-12-06 22:18:01.619243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.493 ms 00:27:28.828 [2024-12-06 22:18:01.619248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.828 [2024-12-06 22:18:01.637695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.828 [2024-12-06 22:18:01.637731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:28.828 [2024-12-06 22:18:01.637741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.415 ms 00:27:28.828 [2024-12-06 22:18:01.637748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:28.828 [2024-12-06 22:18:01.646443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.828 [2024-12-06 22:18:01.646469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:28.828 [2024-12-06 22:18:01.646477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.663 ms 00:27:28.828 [2024-12-06 22:18:01.646482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.828 [2024-12-06 22:18:01.655100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.828 [2024-12-06 22:18:01.655126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:28.828 [2024-12-06 22:18:01.655134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.591 ms 00:27:28.828 [2024-12-06 22:18:01.655140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.828 [2024-12-06 22:18:01.655618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.828 [2024-12-06 22:18:01.655638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:28.828 [2024-12-06 22:18:01.655645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:27:28.828 [2024-12-06 22:18:01.655652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.087 [2024-12-06 22:18:01.699551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.087 [2024-12-06 22:18:01.699596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:29.087 [2024-12-06 22:18:01.699607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.885 ms 00:27:29.087 [2024-12-06 22:18:01.699614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.087 [2024-12-06 22:18:01.707467] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:29.087 [2024-12-06 22:18:01.709466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.087 [2024-12-06 22:18:01.709485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:29.087 [2024-12-06 22:18:01.709494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.806 ms 00:27:29.087 [2024-12-06 22:18:01.709504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.087 [2024-12-06 22:18:01.709574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.087 [2024-12-06 22:18:01.709582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:29.087 [2024-12-06 22:18:01.709589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:29.087 [2024-12-06 22:18:01.709595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.087 [2024-12-06 22:18:01.709636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.087 [2024-12-06 22:18:01.709643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:29.087 [2024-12-06 22:18:01.709650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:29.087 [2024-12-06 22:18:01.709656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.087 [2024-12-06 22:18:01.709672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.087 [2024-12-06 22:18:01.709679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:29.087 
[2024-12-06 22:18:01.709685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:29.087 [2024-12-06 22:18:01.709691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.087 [2024-12-06 22:18:01.709724] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:29.087 [2024-12-06 22:18:01.709732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.087 [2024-12-06 22:18:01.709738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:29.087 [2024-12-06 22:18:01.709744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:29.087 [2024-12-06 22:18:01.709753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.087 [2024-12-06 22:18:01.727911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.087 [2024-12-06 22:18:01.728040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:29.087 [2024-12-06 22:18:01.728055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.143 ms 00:27:29.087 [2024-12-06 22:18:01.728062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.087 [2024-12-06 22:18:01.728130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.087 [2024-12-06 22:18:01.728138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:29.087 [2024-12-06 22:18:01.728146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:29.087 [2024-12-06 22:18:01.728152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.087 [2024-12-06 22:18:01.729011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 208.373 ms, result 0 00:27:30.024  [2024-12-06T22:18:03.834Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-06T22:18:04.772Z] Copying: 48/1024 [MB] (22 MBps) [2024-12-06T22:18:06.152Z] Copying: 95/1024 [MB] (46 MBps) [2024-12-06T22:18:07.091Z] Copying: 143/1024 [MB] (48 MBps) [2024-12-06T22:18:08.061Z] Copying: 158/1024 [MB] (14 MBps) [2024-12-06T22:18:08.993Z] Copying: 172/1024 [MB] (14 MBps) [2024-12-06T22:18:09.930Z] Copying: 184/1024 [MB] (11 MBps) [2024-12-06T22:18:10.867Z] Copying: 195/1024 [MB] (11 MBps) [2024-12-06T22:18:11.805Z] Copying: 211/1024 [MB] (16 MBps) [2024-12-06T22:18:12.801Z] Copying: 223/1024 [MB] (12 MBps) [2024-12-06T22:18:13.755Z] Copying: 235/1024 [MB] (11 MBps) [2024-12-06T22:18:15.137Z] Copying: 247/1024 [MB] (11 MBps) [2024-12-06T22:18:16.074Z] Copying: 262/1024 [MB] (15 MBps) [2024-12-06T22:18:17.013Z] Copying: 279/1024 [MB] (16 MBps) [2024-12-06T22:18:17.954Z] Copying: 291/1024 [MB] (12 MBps) [2024-12-06T22:18:18.892Z] Copying: 308200/1048576 [kB] (9860 kBps) [2024-12-06T22:18:19.829Z] Copying: 311/1024 [MB] (10 MBps) [2024-12-06T22:18:20.761Z] Copying: 322/1024 [MB] (10 MBps) [2024-12-06T22:18:22.135Z] Copying: 333/1024 [MB] (10 MBps) [2024-12-06T22:18:23.077Z] Copying: 343/1024 [MB] (10 MBps) [2024-12-06T22:18:24.011Z] Copying: 356/1024 [MB] (12 MBps) [2024-12-06T22:18:24.945Z] Copying: 369/1024 [MB] (12 MBps) [2024-12-06T22:18:25.885Z] Copying: 379/1024 [MB] (10 MBps) [2024-12-06T22:18:26.817Z] Copying: 391/1024 [MB] (11 MBps) [2024-12-06T22:18:27.752Z] Copying: 404/1024 [MB] (12 MBps) [2024-12-06T22:18:29.135Z] Copying: 417/1024 [MB] (12 MBps) [2024-12-06T22:18:30.064Z] Copying: 431/1024 [MB] (14 MBps) [2024-12-06T22:18:30.994Z] Copying: 
473/1024 [MB] (41 MBps) [2024-12-06T22:18:31.922Z] Copying: 522/1024 [MB] (48 MBps) [2024-12-06T22:18:32.852Z] Copying: 543/1024 [MB] (21 MBps) [2024-12-06T22:18:33.784Z] Copying: 561/1024 [MB] (17 MBps) [2024-12-06T22:18:35.154Z] Copying: 573/1024 [MB] (11 MBps) [2024-12-06T22:18:36.094Z] Copying: 591/1024 [MB] (18 MBps) [2024-12-06T22:18:37.052Z] Copying: 605/1024 [MB] (14 MBps) [2024-12-06T22:18:37.984Z] Copying: 616/1024 [MB] (10 MBps) [2024-12-06T22:18:38.917Z] Copying: 626/1024 [MB] (10 MBps) [2024-12-06T22:18:39.853Z] Copying: 644/1024 [MB] (17 MBps) [2024-12-06T22:18:40.785Z] Copying: 659/1024 [MB] (15 MBps) [2024-12-06T22:18:42.157Z] Copying: 680/1024 [MB] (20 MBps) [2024-12-06T22:18:43.102Z] Copying: 693/1024 [MB] (13 MBps) [2024-12-06T22:18:44.035Z] Copying: 720/1024 [MB] (26 MBps) [2024-12-06T22:18:44.968Z] Copying: 743/1024 [MB] (23 MBps) [2024-12-06T22:18:45.995Z] Copying: 754/1024 [MB] (11 MBps) [2024-12-06T22:18:46.927Z] Copying: 782944/1048576 [kB] (10232 kBps) [2024-12-06T22:18:47.857Z] Copying: 792988/1048576 [kB] (10044 kBps) [2024-12-06T22:18:48.787Z] Copying: 802992/1048576 [kB] (10004 kBps) [2024-12-06T22:18:50.157Z] Copying: 812560/1048576 [kB] (9568 kBps) [2024-12-06T22:18:51.089Z] Copying: 822180/1048576 [kB] (9620 kBps) [2024-12-06T22:18:52.021Z] Copying: 813/1024 [MB] (10 MBps) [2024-12-06T22:18:53.024Z] Copying: 825/1024 [MB] (12 MBps) [2024-12-06T22:18:53.956Z] Copying: 836/1024 [MB] (10 MBps) [2024-12-06T22:18:54.888Z] Copying: 847/1024 [MB] (10 MBps) [2024-12-06T22:18:55.821Z] Copying: 877816/1048576 [kB] (10088 kBps) [2024-12-06T22:18:56.755Z] Copying: 868/1024 [MB] (11 MBps) [2024-12-06T22:18:58.127Z] Copying: 880/1024 [MB] (12 MBps) [2024-12-06T22:18:59.059Z] Copying: 896/1024 [MB] (15 MBps) [2024-12-06T22:18:59.994Z] Copying: 911/1024 [MB] (14 MBps) [2024-12-06T22:19:00.928Z] Copying: 922/1024 [MB] (11 MBps) [2024-12-06T22:19:01.862Z] Copying: 934/1024 [MB] (12 MBps) [2024-12-06T22:19:02.796Z] Copying: 946/1024 [MB] (12 MBps) [2024-12-06T22:19:04.203Z] Copying: 958/1024 [MB] (11 MBps) [2024-12-06T22:19:04.768Z] Copying: 969/1024 [MB] (11 MBps) [2024-12-06T22:19:06.146Z] Copying: 980/1024 [MB] (11 MBps) [2024-12-06T22:19:07.078Z] Copying: 993/1024 [MB] (12 MBps) [2024-12-06T22:19:08.009Z] Copying: 1004/1024 [MB] (11 MBps) [2024-12-06T22:19:08.575Z] Copying: 1017/1024 [MB] (12 MBps) [2024-12-06T22:19:08.575Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-06 22:19:08.337442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.703 [2024-12-06 22:19:08.337494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:35.703 [2024-12-06 22:19:08.337509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:35.703 [2024-12-06 22:19:08.337527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.703 [2024-12-06 22:19:08.337548] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:35.703 [2024-12-06 22:19:08.340182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.703 [2024-12-06 22:19:08.340207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:35.704 [2024-12-06 22:19:08.340218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms 00:28:35.704 [2024-12-06 22:19:08.340227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.342381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.342412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:35.704 [2024-12-06 22:19:08.342421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:28:35.704 [2024-12-06 22:19:08.342428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.359809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.359944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:35.704 [2024-12-06 22:19:08.359962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.360 ms 00:28:35.704 [2024-12-06 22:19:08.359971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.366169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.366201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:35.704 [2024-12-06 22:19:08.366212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.173 ms 00:28:35.704 [2024-12-06 22:19:08.366220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.390651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.390684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:35.704 [2024-12-06 22:19:08.390694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.379 ms 00:28:35.704 [2024-12-06 22:19:08.390702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.404446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.404491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:35.704 [2024-12-06 22:19:08.404501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.714 ms 00:28:35.704 [2024-12-06 22:19:08.404508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.407024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.407054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:35.704 [2024-12-06 22:19:08.407064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:28:35.704 [2024-12-06 22:19:08.407072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.430355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.430384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:35.704 [2024-12-06 22:19:08.430395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.270 ms 00:28:35.704 [2024-12-06 22:19:08.430412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.453208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.453346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:35.704 [2024-12-06 22:19:08.453362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.765 ms 00:28:35.704 [2024-12-06 22:19:08.453371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 
22:19:08.475739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.475768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:35.704 [2024-12-06 22:19:08.475777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.340 ms 00:28:35.704 [2024-12-06 22:19:08.475784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.498248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.704 [2024-12-06 22:19:08.498276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:35.704 [2024-12-06 22:19:08.498286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.413 ms 00:28:35.704 [2024-12-06 22:19:08.498293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.704 [2024-12-06 22:19:08.498322] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:35.704 [2024-12-06 22:19:08.498337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 768 / 261120 wr_cnt: 1 state: open 00:28:35.704 [2024-12-06 22:19:08.498348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: 
free 00:28:35.704 [2024-12-06 22:19:08.498473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 
261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:35.704 [2024-12-06 22:19:08.498824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.498994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499118] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:35.705 [2024-12-06 22:19:08.499206] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:35.705 [2024-12-06 22:19:08.499214] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c260cfd-9d08-4b71-afca-35499d570822 00:28:35.705 [2024-12-06 22:19:08.499229] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 768 00:28:35.705 [2024-12-06 22:19:08.499237] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1728 00:28:35.705 [2024-12-06 22:19:08.499243] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 768 00:28:35.705 [2024-12-06 22:19:08.499252] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.2500 00:28:35.705 [2024-12-06 22:19:08.499259] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:35.705 [2024-12-06 22:19:08.499269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:35.705 [2024-12-06 22:19:08.499276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:35.705 [2024-12-06 22:19:08.499283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:35.705 [2024-12-06 22:19:08.499289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:35.705 [2024-12-06 22:19:08.499296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.705 [2024-12-06 22:19:08.499303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:35.705 [2024-12-06 22:19:08.499311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:28:35.705 [2024-12-06 22:19:08.499318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.705 [2024-12-06 22:19:08.511620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.705 [2024-12-06 22:19:08.511737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:35.705 [2024-12-06 22:19:08.511752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.287 ms 00:28:35.705 [2024-12-06 22:19:08.511766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.705 [2024-12-06 22:19:08.512110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.705 [2024-12-06 22:19:08.512119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:35.705 [2024-12-06 22:19:08.512128] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:28:35.705 [2024-12-06 22:19:08.512142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.705 [2024-12-06 22:19:08.544719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.705 [2024-12-06 22:19:08.544757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.705 [2024-12-06 22:19:08.544768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.705 [2024-12-06 22:19:08.544778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.705 [2024-12-06 22:19:08.544835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.705 [2024-12-06 22:19:08.544844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.705 [2024-12-06 22:19:08.544853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.705 [2024-12-06 22:19:08.544861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.705 [2024-12-06 22:19:08.544934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.705 [2024-12-06 22:19:08.544946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.705 [2024-12-06 22:19:08.544958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.705 [2024-12-06 22:19:08.544966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.705 [2024-12-06 22:19:08.544982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.705 [2024-12-06 22:19:08.544990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.705 [2024-12-06 22:19:08.544999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.705 [2024-12-06 22:19:08.545007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.963 [2024-12-06 22:19:08.621586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.963 [2024-12-06 22:19:08.621627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.963 [2024-12-06 22:19:08.621642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.963 [2024-12-06 22:19:08.621650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.963 [2024-12-06 22:19:08.685109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.963 [2024-12-06 22:19:08.685145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.963 [2024-12-06 22:19:08.685156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.963 [2024-12-06 22:19:08.685164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.963 [2024-12-06 22:19:08.685222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.963 [2024-12-06 22:19:08.685231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:35.963 [2024-12-06 22:19:08.685240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.963 [2024-12-06 22:19:08.685248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.963 [2024-12-06 22:19:08.685301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.963 [2024-12-06 22:19:08.685310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:28:35.963 [2024-12-06 22:19:08.685317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.963 [2024-12-06 22:19:08.685325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.963 [2024-12-06 22:19:08.685408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.963 [2024-12-06 22:19:08.685418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:35.963 [2024-12-06 22:19:08.685425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.963 [2024-12-06 22:19:08.685432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.963 [2024-12-06 22:19:08.685462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.963 [2024-12-06 22:19:08.685471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:35.963 [2024-12-06 22:19:08.685479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.963 [2024-12-06 22:19:08.685486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.963 [2024-12-06 22:19:08.685519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.963 [2024-12-06 22:19:08.685527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:35.963 [2024-12-06 22:19:08.685535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.963 [2024-12-06 22:19:08.685542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.963 [2024-12-06 22:19:08.685582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.963 [2024-12-06 22:19:08.685591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.963 [2024-12-06 22:19:08.685599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.963 [2024-12-06 22:19:08.685607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.963 [2024-12-06 22:19:08.685716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.244 ms, result 0 00:28:37.351 00:28:37.351 00:28:37.351 22:19:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:39.255 22:19:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:39.513 [2024-12-06 22:19:12.179470] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
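With the device unloaded cleanly this time (FTL shutdown, result 0), the test moves on to verification: @90 fingerprints the reference file with md5sum and @93 reads data back out of ftl0 with spdk_dd so the checksums can be compared. A condensed sketch of that pattern follows; the exact pairing of test files to on-device regions spans several steps of dirty_shutdown.sh that are not all visible here, so treat the comparison below as illustrative rather than the script's literal logic:

#!/usr/bin/env bash
# Hedged sketch of the read-back-and-verify pattern behind @90/@93 above.
set -euo pipefail

SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
FTL_TEST_DIR=/home/vagrant/spdk_repo/spdk/test/ftl

# @90: fingerprint the random reference data written into ftl0 earlier.
ref_md5=$(md5sum "$FTL_TEST_DIR/testfile2" | cut -d' ' -f1)

# @93: read 1 GiB back out of the FTL bdev into a scratch file.
"$SPDK_BIN_DIR/spdk_dd" --ib=ftl0 --of="$FTL_TEST_DIR/testfile" \
    --count=262144 --json="$FTL_TEST_DIR/config/ftl.json"

# If the dirty shutdown lost acknowledged writes, the checksums diverge.
got_md5=$(md5sum "$FTL_TEST_DIR/testfile" | cut -d' ' -f1)
if [[ "$ref_md5" != "$got_md5" ]]; then
    echo "FTL data mismatch after dirty shutdown" >&2
    exit 1
fi
echo "md5 verified: $got_md5"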
00:28:39.513 [2024-12-06 22:19:12.179768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81627 ] 00:28:39.513 [2024-12-06 22:19:12.342336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.771 [2024-12-06 22:19:12.443079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.029 [2024-12-06 22:19:12.703307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:40.029 [2024-12-06 22:19:12.703557] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:40.029 [2024-12-06 22:19:12.860784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.029 [2024-12-06 22:19:12.860835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:40.029 [2024-12-06 22:19:12.860849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:40.029 [2024-12-06 22:19:12.860857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.029 [2024-12-06 22:19:12.860911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.029 [2024-12-06 22:19:12.860924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:40.029 [2024-12-06 22:19:12.860932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:40.030 [2024-12-06 22:19:12.860939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.860959] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:40.030 [2024-12-06 22:19:12.862256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:40.030 [2024-12-06 22:19:12.862300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.030 [2024-12-06 22:19:12.862310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:40.030 [2024-12-06 22:19:12.862320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:28:40.030 [2024-12-06 22:19:12.862327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.863457] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:40.030 [2024-12-06 22:19:12.876312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.030 [2024-12-06 22:19:12.876364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:40.030 [2024-12-06 22:19:12.876376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.854 ms 00:28:40.030 [2024-12-06 22:19:12.876384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.876462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.030 [2024-12-06 22:19:12.876472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:40.030 [2024-12-06 22:19:12.876480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:40.030 [2024-12-06 22:19:12.876491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.881760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:40.030 [2024-12-06 22:19:12.881931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:40.030 [2024-12-06 22:19:12.881948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.200 ms 00:28:40.030 [2024-12-06 22:19:12.881961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.882039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.030 [2024-12-06 22:19:12.882048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:40.030 [2024-12-06 22:19:12.882057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:40.030 [2024-12-06 22:19:12.882064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.882130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.030 [2024-12-06 22:19:12.882140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:40.030 [2024-12-06 22:19:12.882149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:40.030 [2024-12-06 22:19:12.882156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.882199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:40.030 [2024-12-06 22:19:12.885481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.030 [2024-12-06 22:19:12.885511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:40.030 [2024-12-06 22:19:12.885523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.289 ms 00:28:40.030 [2024-12-06 22:19:12.885531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.885564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.030 [2024-12-06 22:19:12.885572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:40.030 [2024-12-06 22:19:12.885581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:40.030 [2024-12-06 22:19:12.885588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.885609] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:40.030 [2024-12-06 22:19:12.885627] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:40.030 [2024-12-06 22:19:12.885663] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:40.030 [2024-12-06 22:19:12.885680] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:40.030 [2024-12-06 22:19:12.885781] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:40.030 [2024-12-06 22:19:12.885791] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:40.030 [2024-12-06 22:19:12.885801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:40.030 [2024-12-06 22:19:12.885811] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:40.030 [2024-12-06 22:19:12.885820] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:40.030 [2024-12-06 22:19:12.885828] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:40.030 [2024-12-06 22:19:12.885835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:40.030 [2024-12-06 22:19:12.885845] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:40.030 [2024-12-06 22:19:12.885853] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:40.030 [2024-12-06 22:19:12.885860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.030 [2024-12-06 22:19:12.885868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:40.030 [2024-12-06 22:19:12.885875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:28:40.030 [2024-12-06 22:19:12.885882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.885964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.030 [2024-12-06 22:19:12.885972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:40.030 [2024-12-06 22:19:12.885979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:40.030 [2024-12-06 22:19:12.885986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.030 [2024-12-06 22:19:12.886102] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:40.030 [2024-12-06 22:19:12.886113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:40.030 [2024-12-06 22:19:12.886121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:40.030 [2024-12-06 22:19:12.886128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:40.030 [2024-12-06 22:19:12.886143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:40.030 [2024-12-06 22:19:12.886157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:40.030 [2024-12-06 22:19:12.886164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:40.030 [2024-12-06 22:19:12.886196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:40.030 [2024-12-06 22:19:12.886202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:40.030 [2024-12-06 22:19:12.886209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:40.030 [2024-12-06 22:19:12.886222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:40.030 [2024-12-06 22:19:12.886229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:40.030 [2024-12-06 22:19:12.886235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:40.030 [2024-12-06 22:19:12.886249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:40.030 [2024-12-06 22:19:12.886255] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:40.030 [2024-12-06 22:19:12.886270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.030 [2024-12-06 22:19:12.886284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:40.030 [2024-12-06 22:19:12.886290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.030 [2024-12-06 22:19:12.886303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:40.030 [2024-12-06 22:19:12.886309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.030 [2024-12-06 22:19:12.886322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:40.030 [2024-12-06 22:19:12.886328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.030 [2024-12-06 22:19:12.886341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:40.030 [2024-12-06 22:19:12.886347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:40.030 [2024-12-06 22:19:12.886360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:40.030 [2024-12-06 22:19:12.886366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:40.030 [2024-12-06 22:19:12.886372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:40.030 [2024-12-06 22:19:12.886379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:40.030 [2024-12-06 22:19:12.886386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:40.030 [2024-12-06 22:19:12.886392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:40.030 [2024-12-06 22:19:12.886405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:40.030 [2024-12-06 22:19:12.886412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.030 [2024-12-06 22:19:12.886418] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:40.030 [2024-12-06 22:19:12.886426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:40.030 [2024-12-06 22:19:12.886433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:40.031 [2024-12-06 22:19:12.886440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.031 [2024-12-06 22:19:12.886447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:40.031 [2024-12-06 22:19:12.886454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:40.031 [2024-12-06 22:19:12.886460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:40.031 
[2024-12-06 22:19:12.886467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:40.031 [2024-12-06 22:19:12.886474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:40.031 [2024-12-06 22:19:12.886480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:40.031 [2024-12-06 22:19:12.886488] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:40.031 [2024-12-06 22:19:12.886497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.031 [2024-12-06 22:19:12.886507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:40.031 [2024-12-06 22:19:12.886514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:40.031 [2024-12-06 22:19:12.886522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:40.031 [2024-12-06 22:19:12.886528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:40.031 [2024-12-06 22:19:12.886535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:40.031 [2024-12-06 22:19:12.886542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:40.031 [2024-12-06 22:19:12.886550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:40.031 [2024-12-06 22:19:12.886556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:40.031 [2024-12-06 22:19:12.886563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:40.031 [2024-12-06 22:19:12.886570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:40.031 [2024-12-06 22:19:12.886577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:40.031 [2024-12-06 22:19:12.886584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:40.031 [2024-12-06 22:19:12.886591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:40.031 [2024-12-06 22:19:12.886598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:40.031 [2024-12-06 22:19:12.886605] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:40.031 [2024-12-06 22:19:12.886613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.031 [2024-12-06 22:19:12.886621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:40.031 [2024-12-06 22:19:12.886628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:40.031 [2024-12-06 22:19:12.886635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:40.031 [2024-12-06 22:19:12.886642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:40.031 [2024-12-06 22:19:12.886649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.031 [2024-12-06 22:19:12.886657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:40.031 [2024-12-06 22:19:12.886664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:28:40.031 [2024-12-06 22:19:12.886671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:12.912909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:12.912954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:40.290 [2024-12-06 22:19:12.912965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.193 ms 00:28:40.290 [2024-12-06 22:19:12.912977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:12.913071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:12.913079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:40.290 [2024-12-06 22:19:12.913088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:40.290 [2024-12-06 22:19:12.913095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:12.955338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:12.955518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:40.290 [2024-12-06 22:19:12.955537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.177 ms 00:28:40.290 [2024-12-06 22:19:12.955547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:12.955608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:12.955618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:40.290 [2024-12-06 22:19:12.955632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:40.290 [2024-12-06 22:19:12.955640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:12.956017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:12.956041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:40.290 [2024-12-06 22:19:12.956051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:28:40.290 [2024-12-06 22:19:12.956058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:12.956209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:12.956219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:40.290 [2024-12-06 22:19:12.956236] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:28:40.290 [2024-12-06 22:19:12.956244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:12.969092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:12.969129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:40.290 [2024-12-06 22:19:12.969141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.828 ms 00:28:40.290 [2024-12-06 22:19:12.969148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:12.981834] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:28:40.290 [2024-12-06 22:19:12.981873] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:40.290 [2024-12-06 22:19:12.981886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:12.981894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:40.290 [2024-12-06 22:19:12.981903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.612 ms 00:28:40.290 [2024-12-06 22:19:12.981911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.006035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.006189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:40.290 [2024-12-06 22:19:13.006208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.082 ms 00:28:40.290 [2024-12-06 22:19:13.006216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.018324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.018365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:40.290 [2024-12-06 22:19:13.018376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.076 ms 00:28:40.290 [2024-12-06 22:19:13.018383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.030204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.030242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:40.290 [2024-12-06 22:19:13.030253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.775 ms 00:28:40.290 [2024-12-06 22:19:13.030261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.030891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.030915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:40.290 [2024-12-06 22:19:13.030927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:28:40.290 [2024-12-06 22:19:13.030934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.086931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.086992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:40.290 [2024-12-06 22:19:13.087012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.977 ms 00:28:40.290 [2024-12-06 22:19:13.087021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.097777] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:40.290 [2024-12-06 22:19:13.100625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.100659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:40.290 [2024-12-06 22:19:13.100673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.549 ms 00:28:40.290 [2024-12-06 22:19:13.100682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.100791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.100802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:40.290 [2024-12-06 22:19:13.100814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:40.290 [2024-12-06 22:19:13.100821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.101415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.101439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:40.290 [2024-12-06 22:19:13.101449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:28:40.290 [2024-12-06 22:19:13.101456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.101478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.101487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:40.290 [2024-12-06 22:19:13.101495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:40.290 [2024-12-06 22:19:13.101502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.101538] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:40.290 [2024-12-06 22:19:13.101548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.101555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:40.290 [2024-12-06 22:19:13.101563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:40.290 [2024-12-06 22:19:13.101570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.125138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.125191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:40.290 [2024-12-06 22:19:13.125209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.550 ms 00:28:40.290 [2024-12-06 22:19:13.125218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.290 [2024-12-06 22:19:13.125291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.290 [2024-12-06 22:19:13.125300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:40.290 [2024-12-06 22:19:13.125309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:40.290 [2024-12-06 22:19:13.125316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:40.290 [2024-12-06 22:19:13.126382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 265.172 ms, result 0 00:28:41.662  [2024-12-06T22:19:15.467Z] Copying: 972/1048576 [kB] (972 kBps) [2024-12-06T22:19:16.399Z] Copying: 1916/1048576 [kB] (944 kBps) [2024-12-06T22:19:17.333Z] Copying: 5492/1048576 [kB] (3576 kBps) [2024-12-06T22:19:18.326Z] Copying: 49/1024 [MB] (44 MBps) [2024-12-06T22:19:19.698Z] Copying: 101/1024 [MB] (51 MBps) [2024-12-06T22:19:20.630Z] Copying: 153/1024 [MB] (52 MBps) [2024-12-06T22:19:21.562Z] Copying: 209/1024 [MB] (55 MBps) [2024-12-06T22:19:22.540Z] Copying: 265/1024 [MB] (56 MBps) [2024-12-06T22:19:23.475Z] Copying: 317/1024 [MB] (52 MBps) [2024-12-06T22:19:24.406Z] Copying: 369/1024 [MB] (51 MBps) [2024-12-06T22:19:25.335Z] Copying: 420/1024 [MB] (51 MBps) [2024-12-06T22:19:26.704Z] Copying: 477/1024 [MB] (56 MBps) [2024-12-06T22:19:27.635Z] Copying: 529/1024 [MB] (52 MBps) [2024-12-06T22:19:28.570Z] Copying: 579/1024 [MB] (49 MBps) [2024-12-06T22:19:29.503Z] Copying: 626/1024 [MB] (47 MBps) [2024-12-06T22:19:30.437Z] Copying: 669/1024 [MB] (43 MBps) [2024-12-06T22:19:31.372Z] Copying: 714/1024 [MB] (45 MBps) [2024-12-06T22:19:32.745Z] Copying: 765/1024 [MB] (51 MBps) [2024-12-06T22:19:33.331Z] Copying: 819/1024 [MB] (53 MBps) [2024-12-06T22:19:34.704Z] Copying: 875/1024 [MB] (56 MBps) [2024-12-06T22:19:35.637Z] Copying: 926/1024 [MB] (51 MBps) [2024-12-06T22:19:36.570Z] Copying: 973/1024 [MB] (47 MBps) [2024-12-06T22:19:36.570Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-12-06 22:19:36.302341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.698 [2024-12-06 22:19:36.302406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:03.698 [2024-12-06 22:19:36.302420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:03.698 [2024-12-06 22:19:36.302429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.698 [2024-12-06 22:19:36.302450] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:03.698 [2024-12-06 22:19:36.305080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.698 [2024-12-06 22:19:36.305114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:03.698 [2024-12-06 22:19:36.305125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:29:03.698 [2024-12-06 22:19:36.305133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.305357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.305379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:03.699 [2024-12-06 22:19:36.305387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:29:03.699 [2024-12-06 22:19:36.305394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.321049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.321113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:03.699 [2024-12-06 22:19:36.321126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.637 ms 00:29:03.699 [2024-12-06 22:19:36.321134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 
22:19:36.329022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.329062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:03.699 [2024-12-06 22:19:36.329080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.860 ms 00:29:03.699 [2024-12-06 22:19:36.329090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.352961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.353012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:03.699 [2024-12-06 22:19:36.353024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.810 ms 00:29:03.699 [2024-12-06 22:19:36.353032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.367451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.367502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:03.699 [2024-12-06 22:19:36.367515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.367 ms 00:29:03.699 [2024-12-06 22:19:36.367523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.369645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.369684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:03.699 [2024-12-06 22:19:36.369695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:29:03.699 [2024-12-06 22:19:36.369710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.393691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.393742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:03.699 [2024-12-06 22:19:36.393754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.964 ms 00:29:03.699 [2024-12-06 22:19:36.393762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.416979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.417028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:03.699 [2024-12-06 22:19:36.417040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.163 ms 00:29:03.699 [2024-12-06 22:19:36.417048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.439988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.440033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:03.699 [2024-12-06 22:19:36.440045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.889 ms 00:29:03.699 [2024-12-06 22:19:36.440053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.462848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.699 [2024-12-06 22:19:36.462896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:03.699 [2024-12-06 22:19:36.462908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.722 ms 00:29:03.699 [2024-12-06 22:19:36.462915] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.699 [2024-12-06 22:19:36.462961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:03.699 [2024-12-06 22:19:36.462975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:03.699 [2024-12-06 22:19:36.462985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:03.699 [2024-12-06 22:19:36.462994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463350] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:03.699 [2024-12-06 22:19:36.463395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 
22:19:36.463536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:29:03.700 [2024-12-06 22:19:36.463721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:03.700 [2024-12-06 22:19:36.463744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:03.700 [2024-12-06 22:19:36.463752] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c260cfd-9d08-4b71-afca-35499d570822 00:29:03.700 [2024-12-06 22:19:36.463760] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:03.700 [2024-12-06 22:19:36.463767] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263872 00:29:03.700 [2024-12-06 22:19:36.463778] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261888 00:29:03.700 [2024-12-06 22:19:36.463786] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:29:03.700 [2024-12-06 22:19:36.463793] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:03.700 [2024-12-06 22:19:36.463807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:03.700 [2024-12-06 22:19:36.463815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:03.700 [2024-12-06 22:19:36.463821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:03.700 [2024-12-06 22:19:36.463828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:03.700 [2024-12-06 22:19:36.463835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.700 [2024-12-06 22:19:36.463842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:03.700 [2024-12-06 22:19:36.463850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.876 ms 00:29:03.700 [2024-12-06 22:19:36.463857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.700 [2024-12-06 22:19:36.475909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.700 [2024-12-06 22:19:36.475952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:03.700 [2024-12-06 22:19:36.475962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.034 ms 00:29:03.700 [2024-12-06 22:19:36.475970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.700 [2024-12-06 22:19:36.476332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.700 [2024-12-06 22:19:36.476347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:03.700 [2024-12-06 22:19:36.476355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:29:03.700 [2024-12-06 22:19:36.476362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.700 [2024-12-06 22:19:36.508724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.700 [2024-12-06 22:19:36.508778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.700 [2024-12-06 22:19:36.508788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.700 [2024-12-06 22:19:36.508796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.700 [2024-12-06 22:19:36.508866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.700 [2024-12-06 22:19:36.508874] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.700 [2024-12-06 22:19:36.508882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.700 [2024-12-06 22:19:36.508889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.700 [2024-12-06 22:19:36.508956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.700 [2024-12-06 22:19:36.508966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.700 [2024-12-06 22:19:36.508974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.700 [2024-12-06 22:19:36.508981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.700 [2024-12-06 22:19:36.508996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.700 [2024-12-06 22:19:36.509004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.700 [2024-12-06 22:19:36.509011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.700 [2024-12-06 22:19:36.509018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.959 [2024-12-06 22:19:36.586740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.959 [2024-12-06 22:19:36.586794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.959 [2024-12-06 22:19:36.586806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.959 [2024-12-06 22:19:36.586814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.959 [2024-12-06 22:19:36.649953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.959 [2024-12-06 22:19:36.649997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.959 [2024-12-06 22:19:36.650008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.959 [2024-12-06 22:19:36.650016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.959 [2024-12-06 22:19:36.650086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.959 [2024-12-06 22:19:36.650099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:03.959 [2024-12-06 22:19:36.650107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.959 [2024-12-06 22:19:36.650114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.959 [2024-12-06 22:19:36.650147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.959 [2024-12-06 22:19:36.650155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:03.959 [2024-12-06 22:19:36.650163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.959 [2024-12-06 22:19:36.650170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.959 [2024-12-06 22:19:36.650288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.959 [2024-12-06 22:19:36.650297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:03.959 [2024-12-06 22:19:36.650308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.959 [2024-12-06 22:19:36.650315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.959 [2024-12-06 22:19:36.650343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:29:03.959 [2024-12-06 22:19:36.650352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:03.959 [2024-12-06 22:19:36.650359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.959 [2024-12-06 22:19:36.650367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.959 [2024-12-06 22:19:36.650399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.959 [2024-12-06 22:19:36.650408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:03.959 [2024-12-06 22:19:36.650420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.959 [2024-12-06 22:19:36.650427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.959 [2024-12-06 22:19:36.650464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.959 [2024-12-06 22:19:36.650473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:03.959 [2024-12-06 22:19:36.650481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.959 [2024-12-06 22:19:36.650488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.959 [2024-12-06 22:19:36.650596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.239 ms, result 0 00:29:05.853 00:29:05.853 00:29:05.853 22:19:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:07.754 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:07.754 22:19:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:07.754 [2024-12-06 22:19:40.498319] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:29:07.754 [2024-12-06 22:19:40.498423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81921 ] 00:29:08.013 [2024-12-06 22:19:40.647272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.013 [2024-12-06 22:19:40.731098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.272 [2024-12-06 22:19:40.946170] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:08.272 [2024-12-06 22:19:40.946245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:08.272 [2024-12-06 22:19:41.099852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.099913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:08.272 [2024-12-06 22:19:41.099926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:08.272 [2024-12-06 22:19:41.099935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.099985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.099996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:08.272 [2024-12-06 22:19:41.100005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:08.272 [2024-12-06 22:19:41.100012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.100031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:08.272 [2024-12-06 22:19:41.100721] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:08.272 [2024-12-06 22:19:41.100743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.100751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:08.272 [2024-12-06 22:19:41.100760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:29:08.272 [2024-12-06 22:19:41.100767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.101795] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:08.272 [2024-12-06 22:19:41.114309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.114353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:08.272 [2024-12-06 22:19:41.114366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.515 ms 00:29:08.272 [2024-12-06 22:19:41.114375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.114448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.114458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:08.272 [2024-12-06 22:19:41.114467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:08.272 [2024-12-06 22:19:41.114474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.119434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:08.272 [2024-12-06 22:19:41.119470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:08.272 [2024-12-06 22:19:41.119480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.893 ms 00:29:08.272 [2024-12-06 22:19:41.119492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.119567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.119576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:08.272 [2024-12-06 22:19:41.119584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:08.272 [2024-12-06 22:19:41.119592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.119636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.119645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:08.272 [2024-12-06 22:19:41.119653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:08.272 [2024-12-06 22:19:41.119660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.119685] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:08.272 [2024-12-06 22:19:41.122910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.122943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:08.272 [2024-12-06 22:19:41.122955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:29:08.272 [2024-12-06 22:19:41.122962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.122997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.123005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:08.272 [2024-12-06 22:19:41.123013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:08.272 [2024-12-06 22:19:41.123020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.123041] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:08.272 [2024-12-06 22:19:41.123059] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:08.272 [2024-12-06 22:19:41.123094] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:08.272 [2024-12-06 22:19:41.123111] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:08.272 [2024-12-06 22:19:41.123225] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:08.272 [2024-12-06 22:19:41.123239] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:08.272 [2024-12-06 22:19:41.123249] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:08.272 [2024-12-06 22:19:41.123260] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:08.272 [2024-12-06 22:19:41.123269] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:08.272 [2024-12-06 22:19:41.123276] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:08.272 [2024-12-06 22:19:41.123284] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:08.272 [2024-12-06 22:19:41.123294] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:08.272 [2024-12-06 22:19:41.123301] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:08.272 [2024-12-06 22:19:41.123309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.123317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:08.272 [2024-12-06 22:19:41.123324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:29:08.272 [2024-12-06 22:19:41.123331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.123414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.272 [2024-12-06 22:19:41.123423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:08.272 [2024-12-06 22:19:41.123430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:08.272 [2024-12-06 22:19:41.123437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.272 [2024-12-06 22:19:41.123541] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:08.272 [2024-12-06 22:19:41.123551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:08.272 [2024-12-06 22:19:41.123559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:08.272 [2024-12-06 22:19:41.123567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.272 [2024-12-06 22:19:41.123574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:08.273 [2024-12-06 22:19:41.123581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:08.273 [2024-12-06 22:19:41.123594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:08.273 [2024-12-06 22:19:41.123601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:08.273 [2024-12-06 22:19:41.123614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:08.273 [2024-12-06 22:19:41.123621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:08.273 [2024-12-06 22:19:41.123627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:08.273 [2024-12-06 22:19:41.123639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:08.273 [2024-12-06 22:19:41.123646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:08.273 [2024-12-06 22:19:41.123652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:08.273 [2024-12-06 22:19:41.123665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:08.273 [2024-12-06 22:19:41.123672] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:08.273 [2024-12-06 22:19:41.123685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.273 [2024-12-06 22:19:41.123698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:08.273 [2024-12-06 22:19:41.123705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.273 [2024-12-06 22:19:41.123718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:08.273 [2024-12-06 22:19:41.123724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.273 [2024-12-06 22:19:41.123736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:08.273 [2024-12-06 22:19:41.123743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.273 [2024-12-06 22:19:41.123755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:08.273 [2024-12-06 22:19:41.123761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:08.273 [2024-12-06 22:19:41.123774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:08.273 [2024-12-06 22:19:41.123780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:08.273 [2024-12-06 22:19:41.123787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:08.273 [2024-12-06 22:19:41.123793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:08.273 [2024-12-06 22:19:41.123800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:08.273 [2024-12-06 22:19:41.123806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:08.273 [2024-12-06 22:19:41.123819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:08.273 [2024-12-06 22:19:41.123825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123831] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:08.273 [2024-12-06 22:19:41.123838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:08.273 [2024-12-06 22:19:41.123846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:08.273 [2024-12-06 22:19:41.123853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.273 [2024-12-06 22:19:41.123860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:08.273 [2024-12-06 22:19:41.123867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:08.273 [2024-12-06 22:19:41.123874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:08.273 
[2024-12-06 22:19:41.123881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:08.273 [2024-12-06 22:19:41.123887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:08.273 [2024-12-06 22:19:41.123894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:08.273 [2024-12-06 22:19:41.123903] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:08.273 [2024-12-06 22:19:41.123912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.273 [2024-12-06 22:19:41.123922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:08.273 [2024-12-06 22:19:41.123929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:08.273 [2024-12-06 22:19:41.123936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:08.273 [2024-12-06 22:19:41.123943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:08.273 [2024-12-06 22:19:41.123950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:08.273 [2024-12-06 22:19:41.123957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:08.273 [2024-12-06 22:19:41.123964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:08.273 [2024-12-06 22:19:41.123971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:08.273 [2024-12-06 22:19:41.123978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:08.273 [2024-12-06 22:19:41.123984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:08.273 [2024-12-06 22:19:41.123991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:08.273 [2024-12-06 22:19:41.123998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:08.273 [2024-12-06 22:19:41.124004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:08.273 [2024-12-06 22:19:41.124012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:08.273 [2024-12-06 22:19:41.124018] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:08.273 [2024-12-06 22:19:41.124026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.273 [2024-12-06 22:19:41.124034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:08.273 [2024-12-06 22:19:41.124040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:08.273 [2024-12-06 22:19:41.124047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:08.273 [2024-12-06 22:19:41.124054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:08.273 [2024-12-06 22:19:41.124061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.273 [2024-12-06 22:19:41.124068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:08.273 [2024-12-06 22:19:41.124076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:29:08.273 [2024-12-06 22:19:41.124082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.150050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.150101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:08.532 [2024-12-06 22:19:41.150113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.907 ms 00:29:08.532 [2024-12-06 22:19:41.150125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.150230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.150239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:08.532 [2024-12-06 22:19:41.150247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:29:08.532 [2024-12-06 22:19:41.150254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.194898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.194953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:08.532 [2024-12-06 22:19:41.194966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.572 ms 00:29:08.532 [2024-12-06 22:19:41.194974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.195033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.195043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:08.532 [2024-12-06 22:19:41.195055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:08.532 [2024-12-06 22:19:41.195063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.195447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.195471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:08.532 [2024-12-06 22:19:41.195481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:29:08.532 [2024-12-06 22:19:41.195489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.195619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.195633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:08.532 [2024-12-06 22:19:41.195646] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:29:08.532 [2024-12-06 22:19:41.195654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.208739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.208781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:08.532 [2024-12-06 22:19:41.208792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.067 ms 00:29:08.532 [2024-12-06 22:19:41.208800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.221219] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:08.532 [2024-12-06 22:19:41.221264] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:08.532 [2024-12-06 22:19:41.221277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.221285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:08.532 [2024-12-06 22:19:41.221295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.369 ms 00:29:08.532 [2024-12-06 22:19:41.221302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.245702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.245762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:08.532 [2024-12-06 22:19:41.245775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.351 ms 00:29:08.532 [2024-12-06 22:19:41.245782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.258006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.258053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:08.532 [2024-12-06 22:19:41.258065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.150 ms 00:29:08.532 [2024-12-06 22:19:41.258072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.269593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.269639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:08.532 [2024-12-06 22:19:41.269652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.477 ms 00:29:08.532 [2024-12-06 22:19:41.269659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.270316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.270341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:08.532 [2024-12-06 22:19:41.270354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:29:08.532 [2024-12-06 22:19:41.270362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.326753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.326807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:08.532 [2024-12-06 22:19:41.326826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.374 ms 00:29:08.532 [2024-12-06 22:19:41.326834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.337726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:08.532 [2024-12-06 22:19:41.340456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.340494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:08.532 [2024-12-06 22:19:41.340506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.564 ms 00:29:08.532 [2024-12-06 22:19:41.340514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.532 [2024-12-06 22:19:41.340621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.532 [2024-12-06 22:19:41.340632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:08.532 [2024-12-06 22:19:41.340643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:08.532 [2024-12-06 22:19:41.340650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.533 [2024-12-06 22:19:41.341210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.533 [2024-12-06 22:19:41.341240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:08.533 [2024-12-06 22:19:41.341249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:29:08.533 [2024-12-06 22:19:41.341256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.533 [2024-12-06 22:19:41.341279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.533 [2024-12-06 22:19:41.341288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:08.533 [2024-12-06 22:19:41.341295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:08.533 [2024-12-06 22:19:41.341303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.533 [2024-12-06 22:19:41.341337] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:08.533 [2024-12-06 22:19:41.341347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.533 [2024-12-06 22:19:41.341354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:08.533 [2024-12-06 22:19:41.341362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:08.533 [2024-12-06 22:19:41.341369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.533 [2024-12-06 22:19:41.365466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.533 [2024-12-06 22:19:41.365514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:08.533 [2024-12-06 22:19:41.365531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.079 ms 00:29:08.533 [2024-12-06 22:19:41.365541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.533 [2024-12-06 22:19:41.365613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.533 [2024-12-06 22:19:41.365623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:08.533 [2024-12-06 22:19:41.365631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:08.533 [2024-12-06 22:19:41.365638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
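Every management step in the startup trace above is logged by `trace_step()` in `mngt/ftl_mngt.c` as a fixed quadruplet: an `Action`/`Rollback` marker (line 427), the step `name:` (428), its `duration:` (430), and its `status:` (431). That regularity makes per-step timing easy to pull out of a console log; a small awk sketch, assuming one log entry per line as in the raw Jenkins console output (`console.log` is a placeholder filename):

```bash
# Print "step name  duration" pairs for FTL management steps:
# 428:trace_step lines carry the name, 430:trace_step lines the duration.
awk -F': ' '
  /428:trace_step/ { name = $NF }
  /430:trace_step/ { printf "%-32s %s\n", name, $NF }
' console.log
```

Splitting on `": "` works because both the step name and the duration are the last colon-space-delimited field of their respective entries.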
00:29:08.533 [2024-12-06 22:19:41.366557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 266.281 ms, result 0 00:29:09.907  [2024-12-06T22:19:43.713Z] Copying: 48/1024 [MB] (48 MBps) [2024-12-06T22:19:44.644Z] Copying: 92/1024 [MB] (44 MBps) [2024-12-06T22:19:45.576Z] Copying: 140/1024 [MB] (47 MBps) [2024-12-06T22:19:46.992Z] Copying: 187/1024 [MB] (47 MBps) [2024-12-06T22:19:47.558Z] Copying: 236/1024 [MB] (48 MBps) [2024-12-06T22:19:48.933Z] Copying: 286/1024 [MB] (50 MBps) [2024-12-06T22:19:49.867Z] Copying: 317/1024 [MB] (30 MBps) [2024-12-06T22:19:50.802Z] Copying: 345/1024 [MB] (28 MBps) [2024-12-06T22:19:51.736Z] Copying: 373/1024 [MB] (27 MBps) [2024-12-06T22:19:52.670Z] Copying: 391/1024 [MB] (18 MBps) [2024-12-06T22:19:53.604Z] Copying: 410/1024 [MB] (19 MBps) [2024-12-06T22:19:54.978Z] Copying: 433/1024 [MB] (22 MBps) [2024-12-06T22:19:55.556Z] Copying: 457/1024 [MB] (24 MBps) [2024-12-06T22:19:56.927Z] Copying: 477/1024 [MB] (19 MBps) [2024-12-06T22:19:57.859Z] Copying: 496/1024 [MB] (18 MBps) [2024-12-06T22:19:58.790Z] Copying: 524/1024 [MB] (28 MBps) [2024-12-06T22:19:59.722Z] Copying: 557/1024 [MB] (32 MBps) [2024-12-06T22:20:00.654Z] Copying: 590/1024 [MB] (32 MBps) [2024-12-06T22:20:01.589Z] Copying: 616/1024 [MB] (25 MBps) [2024-12-06T22:20:02.961Z] Copying: 635/1024 [MB] (19 MBps) [2024-12-06T22:20:03.893Z] Copying: 657/1024 [MB] (21 MBps) [2024-12-06T22:20:04.827Z] Copying: 670/1024 [MB] (13 MBps) [2024-12-06T22:20:05.761Z] Copying: 682/1024 [MB] (11 MBps) [2024-12-06T22:20:06.691Z] Copying: 693/1024 [MB] (11 MBps) [2024-12-06T22:20:07.622Z] Copying: 726/1024 [MB] (33 MBps) [2024-12-06T22:20:08.556Z] Copying: 738/1024 [MB] (11 MBps) [2024-12-06T22:20:09.931Z] Copying: 763/1024 [MB] (24 MBps) [2024-12-06T22:20:10.865Z] Copying: 774/1024 [MB] (11 MBps) [2024-12-06T22:20:11.808Z] Copying: 801/1024 [MB] (27 MBps) [2024-12-06T22:20:12.740Z] Copying: 818/1024 [MB] (16 MBps) [2024-12-06T22:20:13.674Z] Copying: 842/1024 [MB] (24 MBps) [2024-12-06T22:20:14.666Z] Copying: 863/1024 [MB] (20 MBps) [2024-12-06T22:20:15.608Z] Copying: 901/1024 [MB] (38 MBps) [2024-12-06T22:20:16.548Z] Copying: 930/1024 [MB] (29 MBps) [2024-12-06T22:20:17.933Z] Copying: 960/1024 [MB] (29 MBps) [2024-12-06T22:20:18.879Z] Copying: 988/1024 [MB] (28 MBps) [2024-12-06T22:20:19.141Z] Copying: 1004/1024 [MB] (15 MBps) [2024-12-06T22:20:19.405Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-06 22:20:19.170549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.170635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:46.533 [2024-12-06 22:20:19.170660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:46.533 [2024-12-06 22:20:19.170676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 22:20:19.170719] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:46.533 [2024-12-06 22:20:19.174395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.174434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:46.533 [2024-12-06 22:20:19.174446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.650 ms 00:29:46.533 [2024-12-06 22:20:19.174455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 
22:20:19.174708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.174723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:46.533 [2024-12-06 22:20:19.174733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:29:46.533 [2024-12-06 22:20:19.174741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 22:20:19.179341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.179576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:46.533 [2024-12-06 22:20:19.179650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.585 ms 00:29:46.533 [2024-12-06 22:20:19.179705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 22:20:19.186626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.186703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:46.533 [2024-12-06 22:20:19.186750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:29:46.533 [2024-12-06 22:20:19.186790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 22:20:19.210200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.210289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:46.533 [2024-12-06 22:20:19.210336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.328 ms 00:29:46.533 [2024-12-06 22:20:19.210374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 22:20:19.223988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.224086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:46.533 [2024-12-06 22:20:19.224138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.568 ms 00:29:46.533 [2024-12-06 22:20:19.224207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 22:20:19.226415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.226495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:46.533 [2024-12-06 22:20:19.226537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.148 ms 00:29:46.533 [2024-12-06 22:20:19.226584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 22:20:19.249652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.249742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:46.533 [2024-12-06 22:20:19.249796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.030 ms 00:29:46.533 [2024-12-06 22:20:19.249839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 22:20:19.272430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.533 [2024-12-06 22:20:19.272521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:46.533 [2024-12-06 22:20:19.272568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.546 ms 00:29:46.533 [2024-12-06 22:20:19.272609] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.533 [2024-12-06 22:20:19.294731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-06 22:20:19.294757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:46.534 [2024-12-06 22:20:19.294767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.073 ms 00:29:46.534 [2024-12-06 22:20:19.294775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-06 22:20:19.316460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.534 [2024-12-06 22:20:19.316487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:46.534 [2024-12-06 22:20:19.316497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.645 ms 00:29:46.534 [2024-12-06 22:20:19.316505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.534 [2024-12-06 22:20:19.316522] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:46.534 [2024-12-06 22:20:19.316540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:46.534 [2024-12-06 22:20:19.316553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:46.534 [2024-12-06 22:20:19.316562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316673] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 
22:20:19.316854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.316999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:29:46.534 [2024-12-06 22:20:19.317035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:46.534 [2024-12-06 22:20:19.317141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:46.535 [2024-12-06 22:20:19.317299] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:46.535 [2024-12-06 22:20:19.317306] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c260cfd-9d08-4b71-afca-35499d570822 00:29:46.535 [2024-12-06 22:20:19.317314] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:46.535 [2024-12-06 22:20:19.317321] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:46.535 [2024-12-06 22:20:19.317327] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:46.535 [2024-12-06 22:20:19.317335] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:46.535 [2024-12-06 22:20:19.317348] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:46.535 [2024-12-06 22:20:19.317356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:46.535 [2024-12-06 22:20:19.317363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:46.535 [2024-12-06 22:20:19.317369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:46.535 [2024-12-06 22:20:19.317376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:46.535 [2024-12-06 22:20:19.317383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.535 [2024-12-06 22:20:19.317390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:46.535 [2024-12-06 22:20:19.317398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:29:46.535 [2024-12-06 22:20:19.317407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.535 [2024-12-06 22:20:19.329614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.535 [2024-12-06 22:20:19.329642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:46.535 [2024-12-06 22:20:19.329653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.187 ms 00:29:46.535 [2024-12-06 22:20:19.329661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.535 [2024-12-06 22:20:19.329997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.535 [2024-12-06 22:20:19.330015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:29:46.535 [2024-12-06 22:20:19.330024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:29:46.535 [2024-12-06 22:20:19.330031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.535 [2024-12-06 22:20:19.362090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.535 [2024-12-06 22:20:19.362131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:46.535 [2024-12-06 22:20:19.362142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.535 [2024-12-06 22:20:19.362149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.535 [2024-12-06 22:20:19.362225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.535 [2024-12-06 22:20:19.362238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:46.535 [2024-12-06 22:20:19.362246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.535 [2024-12-06 22:20:19.362253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.535 [2024-12-06 22:20:19.362313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.535 [2024-12-06 22:20:19.362322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:46.535 [2024-12-06 22:20:19.362330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.535 [2024-12-06 22:20:19.362337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.535 [2024-12-06 22:20:19.362352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.535 [2024-12-06 22:20:19.362359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:46.535 [2024-12-06 22:20:19.362369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.535 [2024-12-06 22:20:19.362377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.798 [2024-12-06 22:20:19.437902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.798 [2024-12-06 22:20:19.437947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:46.798 [2024-12-06 22:20:19.437958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.798 [2024-12-06 22:20:19.437965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.798 [2024-12-06 22:20:19.499696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.798 [2024-12-06 22:20:19.499746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:46.798 [2024-12-06 22:20:19.499757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.798 [2024-12-06 22:20:19.499765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.798 [2024-12-06 22:20:19.499829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.798 [2024-12-06 22:20:19.499838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:46.798 [2024-12-06 22:20:19.499846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.798 [2024-12-06 22:20:19.499854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.798 [2024-12-06 22:20:19.499886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.798 [2024-12-06 
22:20:19.499895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:46.798 [2024-12-06 22:20:19.499902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.798 [2024-12-06 22:20:19.499912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.798 [2024-12-06 22:20:19.499994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.798 [2024-12-06 22:20:19.500004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:46.798 [2024-12-06 22:20:19.500012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.798 [2024-12-06 22:20:19.500020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.798 [2024-12-06 22:20:19.500048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.798 [2024-12-06 22:20:19.500057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:46.798 [2024-12-06 22:20:19.500064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.798 [2024-12-06 22:20:19.500072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.798 [2024-12-06 22:20:19.500106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.798 [2024-12-06 22:20:19.500114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:46.798 [2024-12-06 22:20:19.500122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.798 [2024-12-06 22:20:19.500129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.798 [2024-12-06 22:20:19.500167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.798 [2024-12-06 22:20:19.500199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:46.798 [2024-12-06 22:20:19.500207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.798 [2024-12-06 22:20:19.500217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.798 [2024-12-06 22:20:19.500321] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.768 ms, result 0 00:29:47.374 00:29:47.374 00:29:47.374 22:20:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:49.918 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:49.918 Process with pid 80235 is not found 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80235 
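The `killprocess 80235` call above lands in the no-such-process branch, traced line by line in the output that follows: the pid argument is checked for emptiness (`autotest_common.sh` line 954), probed with `kill -0` (line 958), and, since the process already exited, only the "not found" message is printed (line 981). A reconstruction of that logic from the visible xtrace; the live-process path is an assumption here, and the real helper also handles signals and wait timeouts:

```bash
# Sketch of killprocess() as implied by the traced branch below; the
# kill/wait path is an assumption, not taken in this run.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1      # traced: '[' -z 80235 ']'
    if ! kill -0 "$pid"; then      # traced: kill -0 80235 -> No such process
        echo "Process with pid $pid is not found"
        return 0
    fi
    kill "$pid" && wait "$pid"     # assumed live-process path (not traced)
}
```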
00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80235 ']' 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80235 00:29:49.918 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80235) - No such process 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80235 is not found' 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:49.918 Remove shared memory files 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:49.918 00:29:49.918 real 3m19.602s 00:29:49.918 user 3m37.678s 00:29:49.918 sys 0m23.435s 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.918 22:20:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:49.918 ************************************ 00:29:49.918 END TEST ftl_dirty_shutdown 00:29:49.918 ************************************ 00:29:49.918 22:20:22 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:49.918 22:20:22 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:49.918 22:20:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.918 22:20:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:49.918 ************************************ 00:29:49.918 START TEST ftl_upgrade_shutdown 00:29:49.918 ************************************ 00:29:49.918 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:50.178 * Looking for test storage... 
00:29:50.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.178 --rc genhtml_branch_coverage=1 00:29:50.178 --rc genhtml_function_coverage=1 00:29:50.178 --rc genhtml_legend=1 00:29:50.178 --rc geninfo_all_blocks=1 00:29:50.178 --rc geninfo_unexecuted_blocks=1 00:29:50.178 00:29:50.178 ' 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.178 --rc genhtml_branch_coverage=1 00:29:50.178 --rc genhtml_function_coverage=1 00:29:50.178 --rc genhtml_legend=1 00:29:50.178 --rc geninfo_all_blocks=1 00:29:50.178 --rc geninfo_unexecuted_blocks=1 00:29:50.178 00:29:50.178 ' 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.178 --rc genhtml_branch_coverage=1 00:29:50.178 --rc genhtml_function_coverage=1 00:29:50.178 --rc genhtml_legend=1 00:29:50.178 --rc geninfo_all_blocks=1 00:29:50.178 --rc geninfo_unexecuted_blocks=1 00:29:50.178 00:29:50.178 ' 00:29:50.178 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.178 --rc genhtml_branch_coverage=1 00:29:50.178 --rc genhtml_function_coverage=1 00:29:50.178 --rc genhtml_legend=1 00:29:50.178 --rc geninfo_all_blocks=1 00:29:50.179 --rc geninfo_unexecuted_blocks=1 00:29:50.179 00:29:50.179 ' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:50.179 22:20:22 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:50.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82479 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82479 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82479 ']' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.179 22:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:50.179 [2024-12-06 22:20:22.990766] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
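[Note: tcp_target_setup has just launched the target-side spdk_tgt (pid 82479) pinned to core 0, and waitforlisten blocks until its RPC socket answers before any bdev RPCs are issued. A rough sketch of that polling loop, assuming the default /var/tmp/spdk.sock and the stock rpc.py; the real helper also honors max_retries and re-checks liveness differently:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0   # socket is answering
            sleep 0.1
        done
        return 1   # never came up
    }
]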
00:29:50.179 [2024-12-06 22:20:22.990884] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82479 ] 00:29:50.438 [2024-12-06 22:20:23.150102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.438 [2024-12-06 22:20:23.246189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:51.008 22:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:51.268 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:51.268 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:51.268 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:51.268 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:51.268 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:51.268 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:51.268 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:51.268 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:51.528 { 00:29:51.528 "name": "basen1", 00:29:51.528 "aliases": [ 00:29:51.528 "e35958de-9224-41bc-a710-e0db74804fa7" 00:29:51.528 ], 00:29:51.528 "product_name": "NVMe disk", 00:29:51.528 "block_size": 4096, 00:29:51.528 "num_blocks": 1310720, 00:29:51.528 "uuid": "e35958de-9224-41bc-a710-e0db74804fa7", 00:29:51.528 "numa_id": -1, 00:29:51.528 "assigned_rate_limits": { 00:29:51.528 "rw_ios_per_sec": 0, 00:29:51.528 "rw_mbytes_per_sec": 0, 00:29:51.528 "r_mbytes_per_sec": 0, 00:29:51.528 "w_mbytes_per_sec": 0 00:29:51.528 }, 00:29:51.528 "claimed": true, 00:29:51.528 "claim_type": "read_many_write_one", 00:29:51.528 "zoned": false, 00:29:51.528 "supported_io_types": { 00:29:51.528 "read": true, 00:29:51.528 "write": true, 00:29:51.528 "unmap": true, 00:29:51.528 "flush": true, 00:29:51.528 "reset": true, 00:29:51.528 "nvme_admin": true, 00:29:51.528 "nvme_io": true, 00:29:51.528 "nvme_io_md": false, 00:29:51.528 "write_zeroes": true, 00:29:51.528 "zcopy": false, 00:29:51.528 "get_zone_info": false, 00:29:51.528 "zone_management": false, 00:29:51.528 "zone_append": false, 00:29:51.528 "compare": true, 00:29:51.528 "compare_and_write": false, 00:29:51.528 "abort": true, 00:29:51.528 "seek_hole": false, 00:29:51.528 "seek_data": false, 00:29:51.528 "copy": true, 00:29:51.528 "nvme_iov_md": false 00:29:51.528 }, 00:29:51.528 "driver_specific": { 00:29:51.528 "nvme": [ 00:29:51.528 { 00:29:51.528 "pci_address": "0000:00:11.0", 00:29:51.528 "trid": { 00:29:51.528 "trtype": "PCIe", 00:29:51.528 "traddr": "0000:00:11.0" 00:29:51.528 }, 00:29:51.528 "ctrlr_data": { 00:29:51.528 "cntlid": 0, 00:29:51.528 "vendor_id": "0x1b36", 00:29:51.528 "model_number": "QEMU NVMe Ctrl", 00:29:51.528 "serial_number": "12341", 00:29:51.528 "firmware_revision": "8.0.0", 00:29:51.528 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:51.528 "oacs": { 00:29:51.528 "security": 0, 00:29:51.528 "format": 1, 00:29:51.528 "firmware": 0, 00:29:51.528 "ns_manage": 1 00:29:51.528 }, 00:29:51.528 "multi_ctrlr": false, 00:29:51.528 "ana_reporting": false 00:29:51.528 }, 00:29:51.528 "vs": { 00:29:51.528 "nvme_version": "1.4" 00:29:51.528 }, 00:29:51.528 "ns_data": { 00:29:51.528 "id": 1, 00:29:51.528 "can_share": false 00:29:51.528 } 00:29:51.528 } 00:29:51.528 ], 00:29:51.528 "mp_policy": "active_passive" 00:29:51.528 } 00:29:51.528 } 00:29:51.528 ]' 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:51.528 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:51.787 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=afa271ad-9684-4714-bedc-3d5684137a4e 00:29:51.787 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:51.787 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u afa271ad-9684-4714-bedc-3d5684137a4e 00:29:52.051 22:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:52.331 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1b47043f-c8a6-4ecf-b754-e56c81d4bfe8 00:29:52.331 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1b47043f-c8a6-4ecf-b754-e56c81d4bfe8 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=33c045d1-6eb9-48d2-8773-4ac1914de0ea 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 33c045d1-6eb9-48d2-8773-4ac1914de0ea ]] 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 33c045d1-6eb9-48d2-8773-4ac1914de0ea 5120 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=33c045d1-6eb9-48d2-8773-4ac1914de0ea 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 33c045d1-6eb9-48d2-8773-4ac1914de0ea 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=33c045d1-6eb9-48d2-8773-4ac1914de0ea 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 33c045d1-6eb9-48d2-8773-4ac1914de0ea 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:52.606 { 00:29:52.606 "name": "33c045d1-6eb9-48d2-8773-4ac1914de0ea", 00:29:52.606 "aliases": [ 00:29:52.606 "lvs/basen1p0" 00:29:52.606 ], 00:29:52.606 "product_name": "Logical Volume", 00:29:52.606 "block_size": 4096, 00:29:52.606 "num_blocks": 5242880, 00:29:52.606 "uuid": "33c045d1-6eb9-48d2-8773-4ac1914de0ea", 00:29:52.606 "assigned_rate_limits": { 00:29:52.606 "rw_ios_per_sec": 0, 00:29:52.606 "rw_mbytes_per_sec": 0, 00:29:52.606 "r_mbytes_per_sec": 0, 00:29:52.606 "w_mbytes_per_sec": 0 00:29:52.606 }, 00:29:52.606 "claimed": false, 00:29:52.606 "zoned": false, 00:29:52.606 "supported_io_types": { 00:29:52.606 "read": true, 00:29:52.606 "write": true, 00:29:52.606 "unmap": true, 00:29:52.606 "flush": false, 00:29:52.606 "reset": true, 00:29:52.606 "nvme_admin": false, 00:29:52.606 "nvme_io": false, 00:29:52.606 "nvme_io_md": false, 00:29:52.606 "write_zeroes": 
true, 00:29:52.606 "zcopy": false, 00:29:52.606 "get_zone_info": false, 00:29:52.606 "zone_management": false, 00:29:52.606 "zone_append": false, 00:29:52.606 "compare": false, 00:29:52.606 "compare_and_write": false, 00:29:52.606 "abort": false, 00:29:52.606 "seek_hole": true, 00:29:52.606 "seek_data": true, 00:29:52.606 "copy": false, 00:29:52.606 "nvme_iov_md": false 00:29:52.606 }, 00:29:52.606 "driver_specific": { 00:29:52.606 "lvol": { 00:29:52.606 "lvol_store_uuid": "1b47043f-c8a6-4ecf-b754-e56c81d4bfe8", 00:29:52.606 "base_bdev": "basen1", 00:29:52.606 "thin_provision": true, 00:29:52.606 "num_allocated_clusters": 0, 00:29:52.606 "snapshot": false, 00:29:52.606 "clone": false, 00:29:52.606 "esnap_clone": false 00:29:52.606 } 00:29:52.606 } 00:29:52.606 } 00:29:52.606 ]' 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:52.606 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:52.865 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:52.865 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:52.865 22:20:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:52.865 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:52.865 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:52.865 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:52.865 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:52.865 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:52.865 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:53.124 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:53.124 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:53.124 22:20:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 33c045d1-6eb9-48d2-8773-4ac1914de0ea -c cachen1p0 --l2p_dram_limit 2 00:29:53.385 [2024-12-06 22:20:26.119773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.119821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:53.385 [2024-12-06 22:20:26.119834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:53.385 [2024-12-06 22:20:26.119840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.119888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.119896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:53.385 [2024-12-06 22:20:26.119903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:53.385 [2024-12-06 22:20:26.119909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.119926] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:53.385 [2024-12-06 
22:20:26.120546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:53.385 [2024-12-06 22:20:26.120568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.120574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:53.385 [2024-12-06 22:20:26.120584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.644 ms 00:29:53.385 [2024-12-06 22:20:26.120589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.120736] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9ed0952f-2782-4d86-97d0-8916a1d88592 00:29:53.385 [2024-12-06 22:20:26.121698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.121723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:53.385 [2024-12-06 22:20:26.121731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:53.385 [2024-12-06 22:20:26.121738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.126605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.126637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:53.385 [2024-12-06 22:20:26.126645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.835 ms 00:29:53.385 [2024-12-06 22:20:26.126653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.126684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.126692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:53.385 [2024-12-06 22:20:26.126699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:53.385 [2024-12-06 22:20:26.126707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.126749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.126758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:53.385 [2024-12-06 22:20:26.126766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:53.385 [2024-12-06 22:20:26.126773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.126790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:53.385 [2024-12-06 22:20:26.129636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.129662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:53.385 [2024-12-06 22:20:26.129673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.849 ms 00:29:53.385 [2024-12-06 22:20:26.129679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.129702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.129709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:53.385 [2024-12-06 22:20:26.129717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:53.385 [2024-12-06 22:20:26.129723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.129736] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:53.385 [2024-12-06 22:20:26.129845] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:53.385 [2024-12-06 22:20:26.129858] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:53.385 [2024-12-06 22:20:26.129866] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:53.385 [2024-12-06 22:20:26.129875] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:53.385 [2024-12-06 22:20:26.129882] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:53.385 [2024-12-06 22:20:26.129889] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:53.385 [2024-12-06 22:20:26.129894] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:53.385 [2024-12-06 22:20:26.129904] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:53.385 [2024-12-06 22:20:26.129909] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:53.385 [2024-12-06 22:20:26.129916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.129921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:53.385 [2024-12-06 22:20:26.129929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:29:53.385 [2024-12-06 22:20:26.129935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.385 [2024-12-06 22:20:26.130003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.385 [2024-12-06 22:20:26.130015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:53.385 [2024-12-06 22:20:26.130022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:29:53.386 [2024-12-06 22:20:26.130027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.386 [2024-12-06 22:20:26.130108] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:53.386 [2024-12-06 22:20:26.130115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:53.386 [2024-12-06 22:20:26.130123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:53.386 [2024-12-06 22:20:26.130129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:53.386 [2024-12-06 22:20:26.130141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:53.386 [2024-12-06 22:20:26.130153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:53.386 [2024-12-06 22:20:26.130159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:53.386 [2024-12-06 22:20:26.130164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:53.386 [2024-12-06 22:20:26.130190] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:53.386 [2024-12-06 22:20:26.130197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:53.386 [2024-12-06 22:20:26.130210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:53.386 [2024-12-06 22:20:26.130219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:53.386 [2024-12-06 22:20:26.130233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:53.386 [2024-12-06 22:20:26.130239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:53.386 [2024-12-06 22:20:26.130251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:53.386 [2024-12-06 22:20:26.130256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:53.386 [2024-12-06 22:20:26.130262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:53.386 [2024-12-06 22:20:26.130267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:53.386 [2024-12-06 22:20:26.130274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:53.386 [2024-12-06 22:20:26.130279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:53.386 [2024-12-06 22:20:26.130285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:53.386 [2024-12-06 22:20:26.130290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:53.386 [2024-12-06 22:20:26.130296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:53.386 [2024-12-06 22:20:26.130302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:53.386 [2024-12-06 22:20:26.130308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:53.386 [2024-12-06 22:20:26.130313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:53.386 [2024-12-06 22:20:26.130320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:53.386 [2024-12-06 22:20:26.130325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:53.386 [2024-12-06 22:20:26.130337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:53.386 [2024-12-06 22:20:26.130345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:53.386 [2024-12-06 22:20:26.130356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:53.386 [2024-12-06 22:20:26.130372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:53.386 [2024-12-06 22:20:26.130378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130383] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:53.386 [2024-12-06 22:20:26.130390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:53.386 [2024-12-06 22:20:26.130396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:53.386 [2024-12-06 22:20:26.130403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.386 [2024-12-06 22:20:26.130410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:53.386 [2024-12-06 22:20:26.130417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:53.386 [2024-12-06 22:20:26.130423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:53.386 [2024-12-06 22:20:26.130429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:53.386 [2024-12-06 22:20:26.130434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:53.386 [2024-12-06 22:20:26.130440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:53.386 [2024-12-06 22:20:26.130446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:53.386 [2024-12-06 22:20:26.130456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:53.386 [2024-12-06 22:20:26.130469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:53.386 [2024-12-06 22:20:26.130486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:53.386 [2024-12-06 22:20:26.130493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:53.386 [2024-12-06 22:20:26.130499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:53.386 [2024-12-06 22:20:26.130506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:53.386 [2024-12-06 22:20:26.130549] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:53.386 [2024-12-06 22:20:26.130556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:53.386 [2024-12-06 22:20:26.130569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:53.386 [2024-12-06 22:20:26.130574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:53.386 [2024-12-06 22:20:26.130581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:53.386 [2024-12-06 22:20:26.130586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.386 [2024-12-06 22:20:26.130593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:53.386 [2024-12-06 22:20:26.130598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:29:53.386 [2024-12-06 22:20:26.130605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.386 [2024-12-06 22:20:26.130635] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
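[Note: the layout dump above is FTL startup output after bdev_ftl_create glued the two PCIe devices together: a 20 GiB thin lvol on the base NVMe (0000:00:11.0) as the data store, and a 5 GiB split of the second NVMe (0000:00:10.0) as the non-volatile write cache, which is scrubbed next. Condensed from the xtrace lines earlier in this test; the UUIDs are per-run values printed above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # -> basen1
    $rpc bdev_lvol_create_lvstore basen1 lvs                           # -> lvstore UUID
    $rpc bdev_lvol_create basen1p0 20480 -t -u 1b47043f-c8a6-4ecf-b754-e56c81d4bfe8
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0
    $rpc -t 60 bdev_ftl_create -b ftl -d 33c045d1-6eb9-48d2-8773-4ac1914de0ea \
        -c cachen1p0 --l2p_dram_limit 2
]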
00:29:53.386 [2024-12-06 22:20:26.130645] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:55.934 [2024-12-06 22:20:28.475560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.934 [2024-12-06 22:20:28.475629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:55.934 [2024-12-06 22:20:28.475643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2344.915 ms 00:29:55.934 [2024-12-06 22:20:28.475654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.934 [2024-12-06 22:20:28.500695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.934 [2024-12-06 22:20:28.500745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:55.934 [2024-12-06 22:20:28.500757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.837 ms 00:29:55.934 [2024-12-06 22:20:28.500767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.934 [2024-12-06 22:20:28.500844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.934 [2024-12-06 22:20:28.500856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:55.934 [2024-12-06 22:20:28.500865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:55.934 [2024-12-06 22:20:28.500881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.934 [2024-12-06 22:20:28.531189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.934 [2024-12-06 22:20:28.531234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:55.934 [2024-12-06 22:20:28.531246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.260 ms 00:29:55.934 [2024-12-06 22:20:28.531255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.934 [2024-12-06 22:20:28.531290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.531304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:55.935 [2024-12-06 22:20:28.531312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:55.935 [2024-12-06 22:20:28.531321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.531665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.531694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:55.935 [2024-12-06 22:20:28.531709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:29:55.935 [2024-12-06 22:20:28.531718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.531758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.531768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:55.935 [2024-12-06 22:20:28.531777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:55.935 [2024-12-06 22:20:28.531788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.545554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.545591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:55.935 [2024-12-06 22:20:28.545601] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.748 ms 00:29:55.935 [2024-12-06 22:20:28.545610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.568079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:55.935 [2024-12-06 22:20:28.568982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.569018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:55.935 [2024-12-06 22:20:28.569033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.297 ms 00:29:55.935 [2024-12-06 22:20:28.569042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.589836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.589878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:55.935 [2024-12-06 22:20:28.589893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.752 ms 00:29:55.935 [2024-12-06 22:20:28.589901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.589986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.589998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:55.935 [2024-12-06 22:20:28.590011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:29:55.935 [2024-12-06 22:20:28.590018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.612374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.612420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:55.935 [2024-12-06 22:20:28.612433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.310 ms 00:29:55.935 [2024-12-06 22:20:28.612442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.634595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.634630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:55.935 [2024-12-06 22:20:28.634643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.112 ms 00:29:55.935 [2024-12-06 22:20:28.634650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.635211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.635233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:55.935 [2024-12-06 22:20:28.635244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:29:55.935 [2024-12-06 22:20:28.635254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.701634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.701684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:55.935 [2024-12-06 22:20:28.701701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.343 ms 00:29:55.935 [2024-12-06 22:20:28.701710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.725839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:55.935 [2024-12-06 22:20:28.725887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:55.935 [2024-12-06 22:20:28.725902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.054 ms 00:29:55.935 [2024-12-06 22:20:28.725909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.749087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.749132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:55.935 [2024-12-06 22:20:28.749146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.136 ms 00:29:55.935 [2024-12-06 22:20:28.749153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.772471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.772514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:55.935 [2024-12-06 22:20:28.772527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.267 ms 00:29:55.935 [2024-12-06 22:20:28.772535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.772575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.772584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:55.935 [2024-12-06 22:20:28.772600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:55.935 [2024-12-06 22:20:28.772607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.772683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:55.935 [2024-12-06 22:20:28.772695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:55.935 [2024-12-06 22:20:28.772704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:55.935 [2024-12-06 22:20:28.772711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:55.935 [2024-12-06 22:20:28.773709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2653.522 ms, result 0 00:29:55.935 { 00:29:55.935 "name": "ftl", 00:29:55.935 "uuid": "9ed0952f-2782-4d86-97d0-8916a1d88592" 00:29:55.935 } 00:29:55.935 22:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:56.198 [2024-12-06 22:20:28.972969] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.198 22:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:56.459 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:56.720 [2024-12-06 22:20:29.357365] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:56.720 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:56.720 [2024-12-06 22:20:29.557685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:56.720 22:20:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:57.293 Fill FTL, iteration 1 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=82590 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 82590 /var/tmp/spdk.tgt.sock 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82590 ']' 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:57.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.293 22:20:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:57.293 [2024-12-06 22:20:29.927504] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
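[Note: with the FTL bdev up (UUID 9ed0952f-2782-4d86-97d0-8916a1d88592), the target side exports it over NVMe/TCP before "Fill FTL, iteration 1" begins: one TCP transport, one subsystem capped at a single namespace, the ftl bdev as that namespace, and a loopback listener on port 4420. tcp_dd then launches a second spdk_tgt (pid 82590) pinned to core 1 as initiator-side scaffolding, whose startup banner appears just above. The export sequence, collected from the rpc.py calls above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
        -t TCP -f ipv4 -s 4420 -a 127.0.0.1
]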
00:29:57.293 [2024-12-06 22:20:29.927601] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82590 ] 00:29:57.293 [2024-12-06 22:20:30.081769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.556 [2024-12-06 22:20:30.180776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.128 22:20:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.128 22:20:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:58.128 22:20:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:58.390 ftln1 00:29:58.390 22:20:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:58.390 22:20:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 82590 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82590 ']' 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82590 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82590 00:29:58.651 killing process with pid 82590 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82590' 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82590 00:29:58.651 22:20:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82590 00:30:00.028 22:20:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:00.029 22:20:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:00.029 [2024-12-06 22:20:32.606204] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
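Note that tcp_dd does not keep the helper initiator alive for the IO itself: common.sh@171-176 above wrap its bdev configuration into ini.json, kill the helper (pid 82590), and let spdk_dd replay that JSON to re-create ftln1 in-process. A sketch under the assumption that the echo/save_subsystem_config output is redirected into test/ftl/config/ini.json (the log shows the calls but not the redirection):

  # capture a standalone bdev config so spdk_dd can recreate ftln1 on its own
  {
      echo '{"subsystems": ['
      scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'
  } > test/ftl/config/ini.json

  # fill pass: 1024 x 1 MiB of urandom at queue depth 2, starting at block offset 0
  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0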
00:30:00.029 [2024-12-06 22:20:32.606327] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82632 ] 00:30:00.029 [2024-12-06 22:20:32.761854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.029 [2024-12-06 22:20:32.843558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.403  [2024-12-06T22:20:35.264Z] Copying: 256/1024 [MB] (256 MBps) [2024-12-06T22:20:36.197Z] Copying: 518/1024 [MB] (262 MBps) [2024-12-06T22:20:37.128Z] Copying: 783/1024 [MB] (265 MBps) [2024-12-06T22:20:37.693Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:30:04.821 00:30:04.821 Calculate MD5 checksum, iteration 1 00:30:04.821 22:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:04.821 22:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:04.821 22:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:04.821 22:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.821 22:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.821 22:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.821 22:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:04.821 22:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:05.078 [2024-12-06 22:20:37.753456] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
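The checksum pass that starts above inverts the fill: the same 1 GiB window is read back out of ftln1 into a scratch file, and only its MD5 digest is kept (sums[i], a few lines below) for comparison after the shutdown/upgrade cycle. A sketch with the flags shown in the log:

  # read-back: --ib/--of instead of --if/--ob, and --skip instead of --seek
  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  # keep just the hex digest (upgrade_shutdown.sh@47-48)
  sums[i]=$(md5sum test/ftl/file | cut -f1 -d' ')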
00:30:05.079 [2024-12-06 22:20:37.753797] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82690 ] 00:30:05.079 [2024-12-06 22:20:37.909908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.337 [2024-12-06 22:20:37.989113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.710  [2024-12-06T22:20:39.841Z] Copying: 682/1024 [MB] (682 MBps) [2024-12-06T22:20:40.407Z] Copying: 1024/1024 [MB] (average 683 MBps) 00:30:07.535 00:30:07.535 22:20:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:07.535 22:20:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:10.064 Fill FTL, iteration 2 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=67841fe7d1ff444b9d860b7153493313 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:10.064 22:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:10.064 [2024-12-06 22:20:42.415682] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
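One detail worth spelling out: spdk_dd's --seek/--skip count --bs-sized blocks, not bytes, so iteration 2's --seek=1024 above lands exactly one fill-size past the start of the device, matching upgrade_shutdown.sh@28's size:

  echo $(( 1024 * 1048576 ))   # -> 1073741824, i.e. the 1 GiB written per iteration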
00:30:10.064 [2024-12-06 22:20:42.415801] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82747 ] 00:30:10.064 [2024-12-06 22:20:42.570406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.064 [2024-12-06 22:20:42.651776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.434  [2024-12-06T22:20:45.237Z] Copying: 259/1024 [MB] (259 MBps) [2024-12-06T22:20:46.168Z] Copying: 519/1024 [MB] (260 MBps) [2024-12-06T22:20:47.100Z] Copying: 754/1024 [MB] (235 MBps) [2024-12-06T22:20:47.100Z] Copying: 1014/1024 [MB] (260 MBps) [2024-12-06T22:20:47.683Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:30:14.811 00:30:14.811 Calculate MD5 checksum, iteration 2 00:30:14.811 22:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:14.811 22:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:14.811 22:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:14.811 22:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:14.811 22:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:14.811 22:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:14.811 22:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:14.811 22:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:15.068 [2024-12-06 22:20:47.687913] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
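Both iterations follow the same pattern, so the driving loop can be reconstructed from the upgrade_shutdown.sh fragments echoed in this log. A hedged sketch, not the test's literal code: variable values per @28-@35, counter updates per @41/@45, tcp_dd being the test's wrapper around the spdk_dd invocations shown above, and $testfile standing in for test/ftl/file:

  seek=0; skip=0; sums=()
  for (( i = 0; i < iterations; i++ )); do                # iterations=2
      echo "Fill FTL, iteration $(( i + 1 ))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$(( seek + count ))                            # 0 -> 1024 -> 2048
      echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
      tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$(( skip + count ))
      sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')       # compared again after restart
  done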
00:30:15.068 [2024-12-06 22:20:47.688039] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82800 ] 00:30:15.068 [2024-12-06 22:20:47.845737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.068 [2024-12-06 22:20:47.928746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.964  [2024-12-06T22:20:50.094Z] Copying: 661/1024 [MB] (661 MBps) [2024-12-06T22:20:51.029Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:30:18.157 00:30:18.157 22:20:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:18.157 22:20:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:20.059 22:20:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:20.317 22:20:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=89d1936ff803f5e16edeacae66c843a3 00:30:20.317 22:20:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:20.317 22:20:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:20.317 22:20:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:20.317 [2024-12-06 22:20:53.116017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.317 [2024-12-06 22:20:53.116063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:20.317 [2024-12-06 22:20:53.116075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:20.317 [2024-12-06 22:20:53.116082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.317 [2024-12-06 22:20:53.116102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.317 [2024-12-06 22:20:53.116111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:20.317 [2024-12-06 22:20:53.116118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:20.317 [2024-12-06 22:20:53.116125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.317 [2024-12-06 22:20:53.116141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.317 [2024-12-06 22:20:53.116147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:20.317 [2024-12-06 22:20:53.116154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:20.317 [2024-12-06 22:20:53.116159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.317 [2024-12-06 22:20:53.116224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.199 ms, result 0 00:30:20.317 true 00:30:20.317 22:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:20.575 { 00:30:20.575 "name": "ftl", 00:30:20.575 "properties": [ 00:30:20.575 { 00:30:20.575 "name": "superblock_version", 00:30:20.575 "value": 5, 00:30:20.575 "read-only": true 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "name": "base_device", 00:30:20.575 "bands": [ 00:30:20.575 { 00:30:20.575 "id": 0, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 
00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 1, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 2, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 3, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 4, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 5, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 6, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 7, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 8, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 9, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 10, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 11, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 12, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 13, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 14, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 15, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 16, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 17, 00:30:20.575 "state": "FREE", 00:30:20.575 "validity": 0.0 00:30:20.575 } 00:30:20.575 ], 00:30:20.575 "read-only": true 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "name": "cache_device", 00:30:20.575 "type": "bdev", 00:30:20.575 "chunks": [ 00:30:20.575 { 00:30:20.575 "id": 0, 00:30:20.575 "state": "INACTIVE", 00:30:20.575 "utilization": 0.0 00:30:20.575 }, 00:30:20.575 { 00:30:20.575 "id": 1, 00:30:20.576 "state": "CLOSED", 00:30:20.576 "utilization": 1.0 00:30:20.576 }, 00:30:20.576 { 00:30:20.576 "id": 2, 00:30:20.576 "state": "CLOSED", 00:30:20.576 "utilization": 1.0 00:30:20.576 }, 00:30:20.576 { 00:30:20.576 "id": 3, 00:30:20.576 "state": "OPEN", 00:30:20.576 "utilization": 0.001953125 00:30:20.576 }, 00:30:20.576 { 00:30:20.576 "id": 4, 00:30:20.576 "state": "OPEN", 00:30:20.576 "utilization": 0.0 00:30:20.576 } 00:30:20.576 ], 00:30:20.576 "read-only": true 00:30:20.576 }, 00:30:20.576 { 00:30:20.576 "name": "verbose_mode", 00:30:20.576 "value": true, 00:30:20.576 "unit": "", 00:30:20.576 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:20.576 }, 00:30:20.576 { 00:30:20.576 "name": "prep_upgrade_on_shutdown", 00:30:20.576 "value": false, 00:30:20.576 "unit": "", 00:30:20.576 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:20.576 } 00:30:20.576 ] 00:30:20.576 } 00:30:20.576 22:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:20.833 [2024-12-06 22:20:53.488324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:20.833 [2024-12-06 22:20:53.488367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:20.833 [2024-12-06 22:20:53.488377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:20.833 [2024-12-06 22:20:53.488383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.833 [2024-12-06 22:20:53.488401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.833 [2024-12-06 22:20:53.488407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:20.833 [2024-12-06 22:20:53.488414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:20.833 [2024-12-06 22:20:53.488419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.833 [2024-12-06 22:20:53.488434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.833 [2024-12-06 22:20:53.488440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:20.833 [2024-12-06 22:20:53.488445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:20.833 [2024-12-06 22:20:53.488451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.833 [2024-12-06 22:20:53.488495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.165 ms, result 0 00:30:20.833 true 00:30:20.833 22:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:20.833 22:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:20.833 22:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:21.144 22:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:21.144 22:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:21.144 22:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:21.144 [2024-12-06 22:20:53.888688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.144 [2024-12-06 22:20:53.888732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:21.144 [2024-12-06 22:20:53.888742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:21.144 [2024-12-06 22:20:53.888748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.144 [2024-12-06 22:20:53.888764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.144 [2024-12-06 22:20:53.888771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:21.144 [2024-12-06 22:20:53.888778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:21.144 [2024-12-06 22:20:53.888784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.144 [2024-12-06 22:20:53.888799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.144 [2024-12-06 22:20:53.888805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:21.144 [2024-12-06 22:20:53.888811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:21.144 [2024-12-06 22:20:53.888816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:21.144 [2024-12-06 22:20:53.888861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.164 ms, result 0 00:30:21.144 true 00:30:21.144 22:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:21.403 { 00:30:21.403 "name": "ftl", 00:30:21.403 "properties": [ 00:30:21.403 { 00:30:21.403 "name": "superblock_version", 00:30:21.403 "value": 5, 00:30:21.403 "read-only": true 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "name": "base_device", 00:30:21.403 "bands": [ 00:30:21.403 { 00:30:21.403 "id": 0, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 1, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 2, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 3, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 4, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 5, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 6, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 7, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 8, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 9, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 10, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 11, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 12, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 13, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 14, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 15, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 16, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 17, 00:30:21.403 "state": "FREE", 00:30:21.403 "validity": 0.0 00:30:21.403 } 00:30:21.403 ], 00:30:21.403 "read-only": true 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "name": "cache_device", 00:30:21.403 "type": "bdev", 00:30:21.403 "chunks": [ 00:30:21.403 { 00:30:21.403 "id": 0, 00:30:21.403 "state": "INACTIVE", 00:30:21.403 "utilization": 0.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 1, 00:30:21.403 "state": "CLOSED", 00:30:21.403 "utilization": 1.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 2, 00:30:21.403 "state": "CLOSED", 00:30:21.403 "utilization": 1.0 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 3, 00:30:21.403 "state": "OPEN", 00:30:21.403 "utilization": 0.001953125 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "id": 4, 00:30:21.403 "state": "OPEN", 00:30:21.403 "utilization": 0.0 00:30:21.403 } 00:30:21.403 ], 00:30:21.403 "read-only": true 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "name": "verbose_mode", 
00:30:21.403 "value": true, 00:30:21.403 "unit": "", 00:30:21.403 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:21.403 }, 00:30:21.403 { 00:30:21.403 "name": "prep_upgrade_on_shutdown", 00:30:21.403 "value": true, 00:30:21.403 "unit": "", 00:30:21.403 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:21.403 } 00:30:21.403 ] 00:30:21.403 } 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82479 ]] 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82479 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82479 ']' 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82479 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82479 00:30:21.403 killing process with pid 82479 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82479' 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82479 00:30:21.403 22:20:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82479 00:30:21.970 [2024-12-06 22:20:54.626951] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:21.970 [2024-12-06 22:20:54.637507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.970 [2024-12-06 22:20:54.637554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:21.970 [2024-12-06 22:20:54.637565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:21.970 [2024-12-06 22:20:54.637571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.970 [2024-12-06 22:20:54.637590] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:21.970 [2024-12-06 22:20:54.639666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.970 [2024-12-06 22:20:54.639696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:21.970 [2024-12-06 22:20:54.639705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.065 ms 00:30:21.970 [2024-12-06 22:20:54.639711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.082 [2024-12-06 22:21:02.006979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.082 [2024-12-06 22:21:02.007044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:30.082 [2024-12-06 22:21:02.007060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7367.211 ms 00:30:30.082 [2024-12-06 22:21:02.007066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.082 [2024-12-06 22:21:02.008031] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:30.082 [2024-12-06 22:21:02.008050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:30.082 [2024-12-06 22:21:02.008058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.951 ms 00:30:30.082 [2024-12-06 22:21:02.008064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.082 [2024-12-06 22:21:02.008961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.082 [2024-12-06 22:21:02.008984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:30.082 [2024-12-06 22:21:02.008992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.876 ms 00:30:30.082 [2024-12-06 22:21:02.009003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.082 [2024-12-06 22:21:02.017001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.082 [2024-12-06 22:21:02.017048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:30.082 [2024-12-06 22:21:02.017056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.952 ms 00:30:30.082 [2024-12-06 22:21:02.017063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.082 [2024-12-06 22:21:02.022636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.082 [2024-12-06 22:21:02.022681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:30.082 [2024-12-06 22:21:02.022691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.540 ms 00:30:30.082 [2024-12-06 22:21:02.022697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.082 [2024-12-06 22:21:02.022762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.082 [2024-12-06 22:21:02.022776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:30.082 [2024-12-06 22:21:02.022783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:30.083 [2024-12-06 22:21:02.022789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.030063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.083 [2024-12-06 22:21:02.030105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:30.083 [2024-12-06 22:21:02.030114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.260 ms 00:30:30.083 [2024-12-06 22:21:02.030121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.037296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.083 [2024-12-06 22:21:02.037338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:30.083 [2024-12-06 22:21:02.037346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.145 ms 00:30:30.083 [2024-12-06 22:21:02.037352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.044531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.083 [2024-12-06 22:21:02.044574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:30.083 [2024-12-06 22:21:02.044583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.150 ms 00:30:30.083 [2024-12-06 22:21:02.044589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.051609] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.083 [2024-12-06 22:21:02.051654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:30.083 [2024-12-06 22:21:02.051662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.959 ms 00:30:30.083 [2024-12-06 22:21:02.051668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.051698] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:30.083 [2024-12-06 22:21:02.051723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:30.083 [2024-12-06 22:21:02.051731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:30.083 [2024-12-06 22:21:02.051738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:30.083 [2024-12-06 22:21:02.051745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:30.083 [2024-12-06 22:21:02.051837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:30.083 [2024-12-06 22:21:02.051843] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9ed0952f-2782-4d86-97d0-8916a1d88592 00:30:30.083 [2024-12-06 22:21:02.051849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:30.083 [2024-12-06 22:21:02.051854] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:30.083 [2024-12-06 22:21:02.051860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:30.083 [2024-12-06 22:21:02.051866] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:30.083 [2024-12-06 22:21:02.051874] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:30.083 [2024-12-06 22:21:02.051880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:30.083 [2024-12-06 22:21:02.051888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:30.083 [2024-12-06 22:21:02.051893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:30.083 [2024-12-06 22:21:02.051898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:30.083 [2024-12-06 22:21:02.051903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.083 [2024-12-06 22:21:02.051910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:30.083 [2024-12-06 22:21:02.051917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:30:30.083 [2024-12-06 22:21:02.051923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.061951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.083 [2024-12-06 22:21:02.061992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:30.083 [2024-12-06 22:21:02.062007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.997 ms 00:30:30.083 [2024-12-06 22:21:02.062013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.062306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.083 [2024-12-06 22:21:02.062320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:30.083 [2024-12-06 22:21:02.062327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:30:30.083 [2024-12-06 22:21:02.062333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.094919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.094967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:30.083 [2024-12-06 22:21:02.094976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.094983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.095016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.095023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:30.083 [2024-12-06 22:21:02.095029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.095034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.095112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.095120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:30.083 [2024-12-06 22:21:02.095129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.095135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.095148] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.095155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:30.083 [2024-12-06 22:21:02.095161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.095166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.155451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.155502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:30.083 [2024-12-06 22:21:02.155518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.155524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.204121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.204169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:30.083 [2024-12-06 22:21:02.204193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.204200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.204260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.204276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:30.083 [2024-12-06 22:21:02.204282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.204295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.204340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.204348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:30.083 [2024-12-06 22:21:02.204354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.204360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.204435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.204443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:30.083 [2024-12-06 22:21:02.204449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.204455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.204481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.204489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:30.083 [2024-12-06 22:21:02.204495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.204501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 [2024-12-06 22:21:02.204532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.204539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:30.083 [2024-12-06 22:21:02.204545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.204552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.083 
[2024-12-06 22:21:02.204588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.083 [2024-12-06 22:21:02.204596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:30.083 [2024-12-06 22:21:02.204602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.083 [2024-12-06 22:21:02.204608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.084 [2024-12-06 22:21:02.204702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7567.150 ms, result 0 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82979 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82979 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82979 ']' 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:33.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:33.375 22:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:33.375 [2024-12-06 22:21:05.746031] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
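The shutdown statistics dumped by ftl_debug.c above are internally consistent and worth a quick sanity check: the two 1 GiB fills account exactly for the 524288 valid LBAs reported (which implies FTL's 4 KiB logical block size), the per-band validity sums to the same figure, and WAF is simply total media writes divided by user writes:

  awk 'BEGIN { printf "WAF = %.4f\n", 786752 / 524288 }'   # -> WAF = 1.5006, as logged
  echo $(( 2 * 1024 * 1024 * 1024 / 4096 ))                # 2 GiB of fills / 4 KiB -> 524288 user LBAs
  echo $(( 261120 + 261120 + 2048 ))                       # bands 1-3 valid blocks -> 524288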
00:30:33.375 [2024-12-06 22:21:05.746154] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82979 ] 00:30:33.375 [2024-12-06 22:21:05.902862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.375 [2024-12-06 22:21:05.979996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.943 [2024-12-06 22:21:06.552981] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:33.943 [2024-12-06 22:21:06.553041] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:33.943 [2024-12-06 22:21:06.695950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.695995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:33.943 [2024-12-06 22:21:06.696006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:33.943 [2024-12-06 22:21:06.696012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.696053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.696061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:33.943 [2024-12-06 22:21:06.696068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:33.943 [2024-12-06 22:21:06.696073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.696091] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:33.943 [2024-12-06 22:21:06.696650] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:33.943 [2024-12-06 22:21:06.696670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.696676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:33.943 [2024-12-06 22:21:06.696683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.585 ms 00:30:33.943 [2024-12-06 22:21:06.696689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.697651] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:33.943 [2024-12-06 22:21:06.707237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.707270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:33.943 [2024-12-06 22:21:06.707283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.588 ms 00:30:33.943 [2024-12-06 22:21:06.707289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.707334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.707342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:33.943 [2024-12-06 22:21:06.707348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:33.943 [2024-12-06 22:21:06.707354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.711733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 
22:21:06.711761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:33.943 [2024-12-06 22:21:06.711769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.330 ms 00:30:33.943 [2024-12-06 22:21:06.711774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.711817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.711825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:33.943 [2024-12-06 22:21:06.711831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:33.943 [2024-12-06 22:21:06.711837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.711872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.711882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:33.943 [2024-12-06 22:21:06.711889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:33.943 [2024-12-06 22:21:06.711894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.711911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:33.943 [2024-12-06 22:21:06.714514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.714540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:33.943 [2024-12-06 22:21:06.714547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.608 ms 00:30:33.943 [2024-12-06 22:21:06.714555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.714579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.714586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:33.943 [2024-12-06 22:21:06.714592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:33.943 [2024-12-06 22:21:06.714597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.714612] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:33.943 [2024-12-06 22:21:06.714629] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:33.943 [2024-12-06 22:21:06.714656] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:33.943 [2024-12-06 22:21:06.714667] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:33.943 [2024-12-06 22:21:06.714747] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:33.943 [2024-12-06 22:21:06.714756] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:33.943 [2024-12-06 22:21:06.714764] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:33.943 [2024-12-06 22:21:06.714772] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:33.943 [2024-12-06 22:21:06.714779] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:33.943 [2024-12-06 22:21:06.714787] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:33.943 [2024-12-06 22:21:06.714792] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:33.943 [2024-12-06 22:21:06.714798] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:33.943 [2024-12-06 22:21:06.714804] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:33.943 [2024-12-06 22:21:06.714810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.714815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:33.943 [2024-12-06 22:21:06.714821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:30:33.943 [2024-12-06 22:21:06.714827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.714891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.943 [2024-12-06 22:21:06.714917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:33.943 [2024-12-06 22:21:06.714925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:33.943 [2024-12-06 22:21:06.714930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.943 [2024-12-06 22:21:06.715006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:33.943 [2024-12-06 22:21:06.715019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:33.943 [2024-12-06 22:21:06.715025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:33.943 [2024-12-06 22:21:06.715031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.943 [2024-12-06 22:21:06.715037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:33.943 [2024-12-06 22:21:06.715042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:33.943 [2024-12-06 22:21:06.715047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:33.943 [2024-12-06 22:21:06.715053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:33.943 [2024-12-06 22:21:06.715058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:33.943 [2024-12-06 22:21:06.715064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.943 [2024-12-06 22:21:06.715069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:33.944 [2024-12-06 22:21:06.715074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:33.944 [2024-12-06 22:21:06.715079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.944 [2024-12-06 22:21:06.715085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:33.944 [2024-12-06 22:21:06.715089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:33.944 [2024-12-06 22:21:06.715098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.944 [2024-12-06 22:21:06.715104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:33.944 [2024-12-06 22:21:06.715109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:33.944 [2024-12-06 22:21:06.715114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.944 [2024-12-06 22:21:06.715119] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:33.944 [2024-12-06 22:21:06.715124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:33.944 [2024-12-06 22:21:06.715128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.944 [2024-12-06 22:21:06.715133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:33.944 [2024-12-06 22:21:06.715143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:33.944 [2024-12-06 22:21:06.715148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.944 [2024-12-06 22:21:06.715153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:33.944 [2024-12-06 22:21:06.715158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:33.944 [2024-12-06 22:21:06.715163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.944 [2024-12-06 22:21:06.715167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:33.944 [2024-12-06 22:21:06.715182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:33.944 [2024-12-06 22:21:06.715187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:33.944 [2024-12-06 22:21:06.715192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:33.944 [2024-12-06 22:21:06.715197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:33.944 [2024-12-06 22:21:06.715202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.944 [2024-12-06 22:21:06.715207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:33.944 [2024-12-06 22:21:06.715212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:33.944 [2024-12-06 22:21:06.715216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.944 [2024-12-06 22:21:06.715222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:33.944 [2024-12-06 22:21:06.715227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:33.944 [2024-12-06 22:21:06.715232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.944 [2024-12-06 22:21:06.715237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:33.944 [2024-12-06 22:21:06.715242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:33.944 [2024-12-06 22:21:06.715247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.944 [2024-12-06 22:21:06.715252] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:33.944 [2024-12-06 22:21:06.715257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:33.944 [2024-12-06 22:21:06.715262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:33.944 [2024-12-06 22:21:06.715267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:33.944 [2024-12-06 22:21:06.715276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:33.944 [2024-12-06 22:21:06.715282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:33.944 [2024-12-06 22:21:06.715287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:33.944 [2024-12-06 22:21:06.715292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:33.944 [2024-12-06 22:21:06.715297] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:33.944 [2024-12-06 22:21:06.715303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:33.944 [2024-12-06 22:21:06.715309] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:33.944 [2024-12-06 22:21:06.715316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:33.944 [2024-12-06 22:21:06.715328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:33.944 [2024-12-06 22:21:06.715345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:33.944 [2024-12-06 22:21:06.715350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:33.944 [2024-12-06 22:21:06.715355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:33.944 [2024-12-06 22:21:06.715360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:33.944 [2024-12-06 22:21:06.715397] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:33.944 [2024-12-06 22:21:06.715403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:33.944 [2024-12-06 22:21:06.715414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:33.944 [2024-12-06 22:21:06.715420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:33.944 [2024-12-06 22:21:06.715425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:33.944 [2024-12-06 22:21:06.715430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.944 [2024-12-06 22:21:06.715436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:33.944 [2024-12-06 22:21:06.715441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.477 ms 00:30:33.944 [2024-12-06 22:21:06.715446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.944 [2024-12-06 22:21:06.715479] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:33.944 [2024-12-06 22:21:06.715491] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:36.480 [2024-12-06 22:21:09.220084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.220142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:36.480 [2024-12-06 22:21:09.220157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2504.593 ms 00:30:36.480 [2024-12-06 22:21:09.220165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.245546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.245589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:36.480 [2024-12-06 22:21:09.245600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.165 ms 00:30:36.480 [2024-12-06 22:21:09.245609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.245681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.245696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:36.480 [2024-12-06 22:21:09.245705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:36.480 [2024-12-06 22:21:09.245712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.275879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.275919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:36.480 [2024-12-06 22:21:09.275933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.130 ms 00:30:36.480 [2024-12-06 22:21:09.275941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.275968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.275977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:36.480 [2024-12-06 22:21:09.275985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:36.480 [2024-12-06 22:21:09.275992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.276375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.276438] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:36.480 [2024-12-06 22:21:09.276447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:30:36.480 [2024-12-06 22:21:09.276455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.276496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.276510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:36.480 [2024-12-06 22:21:09.276518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:36.480 [2024-12-06 22:21:09.276526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.290587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.290620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:36.480 [2024-12-06 22:21:09.290631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.040 ms 00:30:36.480 [2024-12-06 22:21:09.290639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.317387] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:36.480 [2024-12-06 22:21:09.317429] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:36.480 [2024-12-06 22:21:09.317442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.317451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:36.480 [2024-12-06 22:21:09.317461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.696 ms 00:30:36.480 [2024-12-06 22:21:09.317468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.331219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.331265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:36.480 [2024-12-06 22:21:09.331276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.704 ms 00:30:36.480 [2024-12-06 22:21:09.331283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.480 [2024-12-06 22:21:09.342674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.480 [2024-12-06 22:21:09.342707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:36.480 [2024-12-06 22:21:09.342717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.353 ms 00:30:36.480 [2024-12-06 22:21:09.342724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.354283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.354316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:36.738 [2024-12-06 22:21:09.354326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.524 ms 00:30:36.738 [2024-12-06 22:21:09.354333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.354930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.354954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:36.738 [2024-12-06 
22:21:09.354963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.510 ms 00:30:36.738 [2024-12-06 22:21:09.354970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.410524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.410575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:36.738 [2024-12-06 22:21:09.410588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.534 ms 00:30:36.738 [2024-12-06 22:21:09.410596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.421110] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:36.738 [2024-12-06 22:21:09.421798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.421828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:36.738 [2024-12-06 22:21:09.421838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.160 ms 00:30:36.738 [2024-12-06 22:21:09.421846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.421932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.421945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:36.738 [2024-12-06 22:21:09.421954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:36.738 [2024-12-06 22:21:09.421961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.422013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.422024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:36.738 [2024-12-06 22:21:09.422032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:36.738 [2024-12-06 22:21:09.422040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.422060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.422068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:36.738 [2024-12-06 22:21:09.422079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:36.738 [2024-12-06 22:21:09.422086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.422116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:36.738 [2024-12-06 22:21:09.422126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.422134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:36.738 [2024-12-06 22:21:09.422141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:36.738 [2024-12-06 22:21:09.422148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.445240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.445278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:36.738 [2024-12-06 22:21:09.445288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.073 ms 00:30:36.738 [2024-12-06 22:21:09.445295] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.738 [2024-12-06 22:21:09.445362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.738 [2024-12-06 22:21:09.445371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:36.738 [2024-12-06 22:21:09.445379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:36.739 [2024-12-06 22:21:09.445387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.739 [2024-12-06 22:21:09.446326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2749.951 ms, result 0 00:30:36.739 [2024-12-06 22:21:09.461574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.739 [2024-12-06 22:21:09.477568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:36.739 [2024-12-06 22:21:09.485680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:37.308 22:21:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.308 22:21:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:37.308 22:21:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:37.308 22:21:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:37.308 22:21:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:37.567 [2024-12-06 22:21:10.186368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.567 [2024-12-06 22:21:10.186432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:37.567 [2024-12-06 22:21:10.186452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:37.567 [2024-12-06 22:21:10.186462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.568 [2024-12-06 22:21:10.186491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.568 [2024-12-06 22:21:10.186501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:37.568 [2024-12-06 22:21:10.186510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:37.568 [2024-12-06 22:21:10.186519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.568 [2024-12-06 22:21:10.186540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.568 [2024-12-06 22:21:10.186549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:37.568 [2024-12-06 22:21:10.186559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:37.568 [2024-12-06 22:21:10.186567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.568 [2024-12-06 22:21:10.186637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.263 ms, result 0 00:30:37.568 true 00:30:37.568 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:37.568 { 00:30:37.568 "name": "ftl", 00:30:37.568 "properties": [ 00:30:37.568 { 00:30:37.568 "name": "superblock_version", 00:30:37.568 "value": 5, 00:30:37.568 "read-only": true 00:30:37.568 }, 
00:30:37.568 {
00:30:37.568 "name": "base_device",
00:30:37.568 "bands": [
00:30:37.568 {
00:30:37.568 "id": 0,
00:30:37.568 "state": "CLOSED",
00:30:37.568 "validity": 1.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 1,
00:30:37.568 "state": "CLOSED",
00:30:37.568 "validity": 1.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 2,
00:30:37.568 "state": "CLOSED",
00:30:37.568 "validity": 0.007843137254901933
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 3,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 4,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 5,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 6,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 7,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 8,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 9,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 10,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 11,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 12,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 13,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 14,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 15,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 16,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 17,
00:30:37.568 "state": "FREE",
00:30:37.568 "validity": 0.0
00:30:37.568 }
00:30:37.568 ],
00:30:37.568 "read-only": true
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "name": "cache_device",
00:30:37.568 "type": "bdev",
00:30:37.568 "chunks": [
00:30:37.568 {
00:30:37.568 "id": 0,
00:30:37.568 "state": "INACTIVE",
00:30:37.568 "utilization": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 1,
00:30:37.568 "state": "OPEN",
00:30:37.568 "utilization": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 2,
00:30:37.568 "state": "OPEN",
00:30:37.568 "utilization": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 3,
00:30:37.568 "state": "FREE",
00:30:37.568 "utilization": 0.0
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "id": 4,
00:30:37.568 "state": "FREE",
00:30:37.568 "utilization": 0.0
00:30:37.568 }
00:30:37.568 ],
00:30:37.568 "read-only": true
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "name": "verbose_mode",
00:30:37.568 "value": true,
00:30:37.568 "unit": "",
00:30:37.568 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:30:37.568 },
00:30:37.568 {
00:30:37.568 "name": "prep_upgrade_on_shutdown",
00:30:37.568 "value": false,
00:30:37.568 "unit": "",
00:30:37.568 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:30:37.568 }
00:30:37.568 ]
00:30:37.568 }
00:30:37.568 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:30:37.568 22:21:10
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:37.568 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:37.827 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:37.827 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:37.827 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:37.827 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:37.827 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:38.087 Validate MD5 checksum, iteration 1 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:38.087 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:38.088 22:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:38.088 [2024-12-06 22:21:10.943236] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:30:38.088 [2024-12-06 22:21:10.943383] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83048 ] 00:30:38.360 [2024-12-06 22:21:11.105863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.619 [2024-12-06 22:21:11.238599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.992  [2024-12-06T22:21:13.430Z] Copying: 638/1024 [MB] (638 MBps) [2024-12-06T22:21:14.804Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:30:41.932 00:30:41.932 22:21:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:41.932 22:21:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:43.879 Validate MD5 checksum, iteration 2 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=67841fe7d1ff444b9d860b7153493313 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 67841fe7d1ff444b9d860b7153493313 != \6\7\8\4\1\f\e\7\d\1\f\f\4\4\4\b\9\d\8\6\0\b\7\1\5\3\4\9\3\3\1\3 ]] 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:43.879 22:21:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:43.879 [2024-12-06 22:21:16.703288] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
00:30:43.879 [2024-12-06 22:21:16.703401] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83115 ] 00:30:44.136 [2024-12-06 22:21:16.862682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.136 [2024-12-06 22:21:16.956821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.036  [2024-12-06T22:21:19.165Z] Copying: 672/1024 [MB] (672 MBps) [2024-12-06T22:21:25.732Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:30:52.860 00:30:52.860 22:21:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:52.860 22:21:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:54.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=89d1936ff803f5e16edeacae66c843a3 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 89d1936ff803f5e16edeacae66c843a3 != \8\9\d\1\9\3\6\f\f\8\0\3\f\5\e\1\6\e\d\e\a\c\a\e\6\6\c\8\4\3\a\3 ]] 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 82979 ]] 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 82979 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83223 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83223 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83223 ']' 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
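The traced shell above is the core of the upgrade_shutdown flow: read each previously written 1 GiB region back from the ftln1 bdev over NVMe/TCP, confirm its MD5 sum against the value recorded at write time, then kill the target with SIGKILL so FTL comes back up dirty. A minimal bash sketch of that logic, reconstructed from the xtrace records above (the helper bodies and the checksums array name are assumptions inferred from the trace, not verbatim repository code):

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2   # two 1 GiB regions are validated in this run

    test_validate_checksum() {
        local skip=0 sum i
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # tcp_dd expands to spdk_dd with the NVMe/TCP initiator config
            # (test/ftl/config/ini.json), as the expanded command above shows
            tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
            ((skip += 1024))
            sum=$(md5sum "$file" | cut -f1 -d' ')
            # must equal the sum recorded when this region was written
            if [[ "$sum" != "${checksums[i]}" ]]; then
                return 1
            fi
        done
    }

    tcp_target_shutdown_dirty() {
        # SIGKILL skips FTL's clean shutdown, so the superblock stays dirty and
        # the next startup takes the recovery path ("SHM: clean 0, shm_clean 0")
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

The kill -9 82979 above is this dirty-shutdown path; the spdk_tgt restart against tgt.json that follows is what drives the band, P2L checkpoint, and open-chunk recovery seen in the remainder of the log.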
00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.232 22:21:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:54.232 [2024-12-06 22:21:26.667067] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:30:54.232 [2024-12-06 22:21:26.667199] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83223 ] 00:30:54.232 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 82979 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:54.232 [2024-12-06 22:21:26.822399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.232 [2024-12-06 22:21:26.899969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.796 [2024-12-06 22:21:27.470885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:54.796 [2024-12-06 22:21:27.470943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:54.796 [2024-12-06 22:21:27.613834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.796 [2024-12-06 22:21:27.613875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:54.796 [2024-12-06 22:21:27.613885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:54.796 [2024-12-06 22:21:27.613891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.613930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.796 [2024-12-06 22:21:27.613938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:54.796 [2024-12-06 22:21:27.613945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:54.796 [2024-12-06 22:21:27.613950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.613967] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:54.796 [2024-12-06 22:21:27.614500] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:54.796 [2024-12-06 22:21:27.614517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.796 [2024-12-06 22:21:27.614524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:54.796 [2024-12-06 22:21:27.614530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.556 ms 00:30:54.796 [2024-12-06 22:21:27.614536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.614797] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:54.796 [2024-12-06 22:21:27.627211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.796 [2024-12-06 22:21:27.627242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:54.796 [2024-12-06 22:21:27.627252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.414 ms 00:30:54.796 [2024-12-06 22:21:27.627259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.633944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:54.796 [2024-12-06 22:21:27.633973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:54.796 [2024-12-06 22:21:27.633981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:54.796 [2024-12-06 22:21:27.633986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.634232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.796 [2024-12-06 22:21:27.634246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:54.796 [2024-12-06 22:21:27.634253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:30:54.796 [2024-12-06 22:21:27.634259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.634297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.796 [2024-12-06 22:21:27.634305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:54.796 [2024-12-06 22:21:27.634311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:54.796 [2024-12-06 22:21:27.634317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.634336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.796 [2024-12-06 22:21:27.634343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:54.796 [2024-12-06 22:21:27.634348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:54.796 [2024-12-06 22:21:27.634354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.634368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:54.796 [2024-12-06 22:21:27.636483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.796 [2024-12-06 22:21:27.636515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:54.796 [2024-12-06 22:21:27.636522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.118 ms 00:30:54.796 [2024-12-06 22:21:27.636528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.636550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.796 [2024-12-06 22:21:27.636556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:54.796 [2024-12-06 22:21:27.636562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:54.796 [2024-12-06 22:21:27.636569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.796 [2024-12-06 22:21:27.636585] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:54.796 [2024-12-06 22:21:27.636600] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:54.796 [2024-12-06 22:21:27.636626] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:54.796 [2024-12-06 22:21:27.636639] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:54.796 [2024-12-06 22:21:27.636719] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:54.796 [2024-12-06 22:21:27.636727] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:54.796 [2024-12-06 22:21:27.636734] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:54.796 [2024-12-06 22:21:27.636742] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:54.796 [2024-12-06 22:21:27.636748] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:54.796 [2024-12-06 22:21:27.636755] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:54.796 [2024-12-06 22:21:27.636760] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:54.797 [2024-12-06 22:21:27.636766] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:54.797 [2024-12-06 22:21:27.636771] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:54.797 [2024-12-06 22:21:27.636780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.797 [2024-12-06 22:21:27.636785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:54.797 [2024-12-06 22:21:27.636791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:30:54.797 [2024-12-06 22:21:27.636796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.797 [2024-12-06 22:21:27.636861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.797 [2024-12-06 22:21:27.636867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:54.797 [2024-12-06 22:21:27.636872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:54.797 [2024-12-06 22:21:27.636878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.797 [2024-12-06 22:21:27.636952] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:54.797 [2024-12-06 22:21:27.636967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:54.797 [2024-12-06 22:21:27.636974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:54.797 [2024-12-06 22:21:27.636980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.636986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:54.797 [2024-12-06 22:21:27.636991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.636997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:54.797 [2024-12-06 22:21:27.637002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:54.797 [2024-12-06 22:21:27.637007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:54.797 [2024-12-06 22:21:27.637011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:54.797 [2024-12-06 22:21:27.637022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:54.797 [2024-12-06 22:21:27.637028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:54.797 [2024-12-06 22:21:27.637039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:54.797 [2024-12-06 22:21:27.637044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:54.797 [2024-12-06 22:21:27.637054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:54.797 [2024-12-06 22:21:27.637059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:54.797 [2024-12-06 22:21:27.637069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:54.797 [2024-12-06 22:21:27.637079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:54.797 [2024-12-06 22:21:27.637084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:54.797 [2024-12-06 22:21:27.637088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:54.797 [2024-12-06 22:21:27.637093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:54.797 [2024-12-06 22:21:27.637099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:54.797 [2024-12-06 22:21:27.637104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:54.797 [2024-12-06 22:21:27.637109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:54.797 [2024-12-06 22:21:27.637114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:54.797 [2024-12-06 22:21:27.637118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:54.797 [2024-12-06 22:21:27.637123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:54.797 [2024-12-06 22:21:27.637128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:54.797 [2024-12-06 22:21:27.637133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:54.797 [2024-12-06 22:21:27.637138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:54.797 [2024-12-06 22:21:27.637147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:54.797 [2024-12-06 22:21:27.637152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:54.797 [2024-12-06 22:21:27.637162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:54.797 [2024-12-06 22:21:27.637191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:54.797 [2024-12-06 22:21:27.637196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637201] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:54.797 [2024-12-06 22:21:27.637208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:54.797 [2024-12-06 22:21:27.637213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:54.797 [2024-12-06 22:21:27.637219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:54.797 [2024-12-06 22:21:27.637225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:54.797 [2024-12-06 22:21:27.637231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:54.797 [2024-12-06 22:21:27.637235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:54.797 [2024-12-06 22:21:27.637241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:54.797 [2024-12-06 22:21:27.637245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:54.797 [2024-12-06 22:21:27.637251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:54.797 [2024-12-06 22:21:27.637258] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:54.797 [2024-12-06 22:21:27.637264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:54.797 [2024-12-06 22:21:27.637276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:54.797 [2024-12-06 22:21:27.637292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:54.797 [2024-12-06 22:21:27.637297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:54.797 [2024-12-06 22:21:27.637302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:54.797 [2024-12-06 22:21:27.637307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:54.797 [2024-12-06 22:21:27.637345] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:54.797 [2024-12-06 22:21:27.637351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:54.797 [2024-12-06 22:21:27.637363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:54.797 [2024-12-06 22:21:27.637369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:54.797 [2024-12-06 22:21:27.637374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:54.797 [2024-12-06 22:21:27.637379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.797 [2024-12-06 22:21:27.637388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:54.797 [2024-12-06 22:21:27.637393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.479 ms 00:30:54.797 [2024-12-06 22:21:27.637399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.797 [2024-12-06 22:21:27.656271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.797 [2024-12-06 22:21:27.656296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:54.797 [2024-12-06 22:21:27.656304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.835 ms 00:30:54.797 [2024-12-06 22:21:27.656309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.797 [2024-12-06 22:21:27.656343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.797 [2024-12-06 22:21:27.656349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:54.797 [2024-12-06 22:21:27.656356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:54.797 [2024-12-06 22:21:27.656361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.055 [2024-12-06 22:21:27.680015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.055 [2024-12-06 22:21:27.680043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:55.055 [2024-12-06 22:21:27.680050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.615 ms 00:30:55.055 [2024-12-06 22:21:27.680056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.055 [2024-12-06 22:21:27.680075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.055 [2024-12-06 22:21:27.680081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:55.055 [2024-12-06 22:21:27.680088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:55.055 [2024-12-06 22:21:27.680096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.055 [2024-12-06 22:21:27.680167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.055 [2024-12-06 22:21:27.680184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:55.055 [2024-12-06 22:21:27.680191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:55.055 [2024-12-06 22:21:27.680197] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:55.055 [2024-12-06 22:21:27.680225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.055 [2024-12-06 22:21:27.680232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:55.055 [2024-12-06 22:21:27.680238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:55.055 [2024-12-06 22:21:27.680243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.691502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.691526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:55.056 [2024-12-06 22:21:27.691534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.239 ms 00:30:55.056 [2024-12-06 22:21:27.691540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.691610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.691619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:55.056 [2024-12-06 22:21:27.691625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:55.056 [2024-12-06 22:21:27.691630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.727018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.727056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:55.056 [2024-12-06 22:21:27.727069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.372 ms 00:30:55.056 [2024-12-06 22:21:27.727077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.736185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.736214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:55.056 [2024-12-06 22:21:27.736230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:30:55.056 [2024-12-06 22:21:27.736238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.790483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.790533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:55.056 [2024-12-06 22:21:27.790544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.191 ms 00:30:55.056 [2024-12-06 22:21:27.790552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.790677] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:55.056 [2024-12-06 22:21:27.790771] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:55.056 [2024-12-06 22:21:27.790858] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:55.056 [2024-12-06 22:21:27.790945] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:55.056 [2024-12-06 22:21:27.790954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.790962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:55.056 [2024-12-06 
22:21:27.790971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.362 ms 00:30:55.056 [2024-12-06 22:21:27.790979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.791030] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:55.056 [2024-12-06 22:21:27.791046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.791056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:55.056 [2024-12-06 22:21:27.791064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:55.056 [2024-12-06 22:21:27.791072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.806523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.806560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:55.056 [2024-12-06 22:21:27.806570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.431 ms 00:30:55.056 [2024-12-06 22:21:27.806578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.815030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.815059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:55.056 [2024-12-06 22:21:27.815069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:55.056 [2024-12-06 22:21:27.815076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.056 [2024-12-06 22:21:27.815168] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:55.056 [2024-12-06 22:21:27.815309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.056 [2024-12-06 22:21:27.815320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:55.056 [2024-12-06 22:21:27.815328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.142 ms 00:30:55.056 [2024-12-06 22:21:27.815335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.622 [2024-12-06 22:21:28.444856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.622 [2024-12-06 22:21:28.444925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:55.622 [2024-12-06 22:21:28.444940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 628.699 ms 00:30:55.622 [2024-12-06 22:21:28.444949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.622 [2024-12-06 22:21:28.449614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.622 [2024-12-06 22:21:28.449649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:55.622 [2024-12-06 22:21:28.449659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.669 ms 00:30:55.622 [2024-12-06 22:21:28.449667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.622 [2024-12-06 22:21:28.450231] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:55.622 [2024-12-06 22:21:28.450263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.622 [2024-12-06 22:21:28.450271] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:55.622 [2024-12-06 22:21:28.450279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:30:55.622 [2024-12-06 22:21:28.450287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.622 [2024-12-06 22:21:28.450328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.622 [2024-12-06 22:21:28.450339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:55.622 [2024-12-06 22:21:28.450347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:55.622 [2024-12-06 22:21:28.450358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.622 [2024-12-06 22:21:28.450391] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 635.221 ms, result 0 00:30:55.622 [2024-12-06 22:21:28.450427] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:55.622 [2024-12-06 22:21:28.450500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.622 [2024-12-06 22:21:28.450510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:55.622 [2024-12-06 22:21:28.450517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:30:55.622 [2024-12-06 22:21:28.450525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.565 [2024-12-06 22:21:29.093462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.565 [2024-12-06 22:21:29.093550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:56.565 [2024-12-06 22:21:29.093579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 642.028 ms 00:30:56.565 [2024-12-06 22:21:29.093588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.565 [2024-12-06 22:21:29.097989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.565 [2024-12-06 22:21:29.098036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:56.565 [2024-12-06 22:21:29.098048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.273 ms 00:30:56.565 [2024-12-06 22:21:29.098055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.565 [2024-12-06 22:21:29.098536] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:56.565 [2024-12-06 22:21:29.098580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.098589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:56.566 [2024-12-06 22:21:29.098600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.461 ms 00:30:56.566 [2024-12-06 22:21:29.098607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.098657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.098669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:56.566 [2024-12-06 22:21:29.098677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:56.566 [2024-12-06 22:21:29.098685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 
22:21:29.098724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 648.288 ms, result 0 00:30:56.566 [2024-12-06 22:21:29.098775] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:56.566 [2024-12-06 22:21:29.098797] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:56.566 [2024-12-06 22:21:29.098808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.098818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:56.566 [2024-12-06 22:21:29.098827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1283.655 ms 00:30:56.566 [2024-12-06 22:21:29.098835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.098866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.098881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:56.566 [2024-12-06 22:21:29.098890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:56.566 [2024-12-06 22:21:29.098897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.111492] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:56.566 [2024-12-06 22:21:29.111618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.111636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:56.566 [2024-12-06 22:21:29.111647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.705 ms 00:30:56.566 [2024-12-06 22:21:29.111655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.112393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.112423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:56.566 [2024-12-06 22:21:29.112437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.660 ms 00:30:56.566 [2024-12-06 22:21:29.112446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.114664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.114691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:56.566 [2024-12-06 22:21:29.114702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.200 ms 00:30:56.566 [2024-12-06 22:21:29.114712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.114753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.114763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:56.566 [2024-12-06 22:21:29.114772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:56.566 [2024-12-06 22:21:29.114784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.114892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.114909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:56.566 
[2024-12-06 22:21:29.114918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:56.566 [2024-12-06 22:21:29.114926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.114948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.114957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:56.566 [2024-12-06 22:21:29.114966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:56.566 [2024-12-06 22:21:29.114974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.115008] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:56.566 [2024-12-06 22:21:29.115018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.115026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:56.566 [2024-12-06 22:21:29.115034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:56.566 [2024-12-06 22:21:29.115041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.115093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.566 [2024-12-06 22:21:29.115108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:56.566 [2024-12-06 22:21:29.115116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:56.566 [2024-12-06 22:21:29.115124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.566 [2024-12-06 22:21:29.116250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1501.917 ms, result 0 00:30:56.566 [2024-12-06 22:21:29.131969] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.566 [2024-12-06 22:21:29.147963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:56.566 [2024-12-06 22:21:29.156333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:56.566 Validate MD5 checksum, iteration 1 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:56.566 22:21:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:56.566 22:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:56.566 [2024-12-06 22:21:29.276311] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 00:30:56.566 [2024-12-06 22:21:29.276437] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83254 ] 00:30:56.827 [2024-12-06 22:21:29.435565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.827 [2024-12-06 22:21:29.531201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.205  [2024-12-06T22:21:31.647Z] Copying: 713/1024 [MB] (713 MBps) [2024-12-06T22:21:34.231Z] Copying: 1024/1024 [MB] (average 715 MBps) 00:31:01.359 00:31:01.359 22:21:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:01.359 22:21:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:03.275 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:03.275 Validate MD5 checksum, iteration 2 00:31:03.275 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=67841fe7d1ff444b9d860b7153493313 00:31:03.275 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 67841fe7d1ff444b9d860b7153493313 != \6\7\8\4\1\f\e\7\d\1\f\f\4\4\4\b\9\d\8\6\0\b\7\1\5\3\4\9\3\3\1\3 ]] 00:31:03.275 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:03.275 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:03.275 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:03.275 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:03.275 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:03.276 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:03.276 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:03.276 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:03.276 22:21:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:03.276 [2024-12-06 22:21:35.730227] Starting SPDK v25.01-pre git sha1 
0f59982b6 / DPDK 24.03.0 initialization... 00:31:03.276 [2024-12-06 22:21:35.730337] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83322 ] 00:31:03.276 [2024-12-06 22:21:35.890379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.276 [2024-12-06 22:21:35.984629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.661  [2024-12-06T22:21:38.474Z] Copying: 648/1024 [MB] (648 MBps) [2024-12-06T22:21:41.022Z] Copying: 1024/1024 [MB] (average 614 MBps) 00:31:08.150 00:31:08.150 22:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:08.150 22:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:10.698 22:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:10.698 22:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=89d1936ff803f5e16edeacae66c843a3 00:31:10.698 22:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 89d1936ff803f5e16edeacae66c843a3 != \8\9\d\1\9\3\6\f\f\8\0\3\f\5\e\1\6\e\d\e\a\c\a\e\6\6\c\8\4\3\a\3 ]] 00:31:10.698 22:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:10.698 22:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:10.698 22:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:10.698 22:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:10.698 22:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:10.698 22:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83223 ]] 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83223 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83223 ']' 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83223 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83223 00:31:10.698 killing process with pid 83223 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83223' 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83223 00:31:10.698 22:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83223 00:31:10.960 [2024-12-06 22:21:43.661112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:10.960 [2024-12-06 22:21:43.673466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.673505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:10.960 [2024-12-06 22:21:43.673516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:10.960 [2024-12-06 22:21:43.673522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.673540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:10.960 [2024-12-06 22:21:43.675628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.675659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:10.960 [2024-12-06 22:21:43.675667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.078 ms 00:31:10.960 [2024-12-06 22:21:43.675674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.675855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.675870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:10.960 [2024-12-06 22:21:43.675877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.155 ms 00:31:10.960 [2024-12-06 22:21:43.675883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.676835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.676860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:10.960 [2024-12-06 22:21:43.676867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.941 ms 00:31:10.960 [2024-12-06 22:21:43.676878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.677779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.677799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:10.960 [2024-12-06 22:21:43.677807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.880 ms 00:31:10.960 [2024-12-06 22:21:43.677813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.685443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.685472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:10.960 [2024-12-06 22:21:43.685484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.597 ms 00:31:10.960 [2024-12-06 22:21:43.685490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.689570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.689597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:10.960 [2024-12-06 22:21:43.689606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.052 ms 00:31:10.960 [2024-12-06 22:21:43.689613] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.689660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.689667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:10.960 [2024-12-06 22:21:43.689674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:10.960 [2024-12-06 22:21:43.689683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.697049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.697076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:10.960 [2024-12-06 22:21:43.697083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.353 ms 00:31:10.960 [2024-12-06 22:21:43.697089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.704089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.704115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:10.960 [2024-12-06 22:21:43.704122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.947 ms 00:31:10.960 [2024-12-06 22:21:43.704128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.710996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.711023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:10.960 [2024-12-06 22:21:43.711030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.845 ms 00:31:10.960 [2024-12-06 22:21:43.711036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.717966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.960 [2024-12-06 22:21:43.717992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:10.960 [2024-12-06 22:21:43.717998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.889 ms 00:31:10.960 [2024-12-06 22:21:43.718004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.960 [2024-12-06 22:21:43.718028] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:10.960 [2024-12-06 22:21:43.718041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:10.960 [2024-12-06 22:21:43.718049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:10.960 [2024-12-06 22:21:43.718055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:10.960 [2024-12-06 22:21:43.718062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 
[2024-12-06 22:21:43.718092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:10.960 [2024-12-06 22:21:43.718152] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:10.960 [2024-12-06 22:21:43.718158] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9ed0952f-2782-4d86-97d0-8916a1d88592 00:31:10.960 [2024-12-06 22:21:43.718164] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:10.960 [2024-12-06 22:21:43.718170] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:10.960 [2024-12-06 22:21:43.718185] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:10.960 [2024-12-06 22:21:43.718191] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:10.960 [2024-12-06 22:21:43.718196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:10.960 [2024-12-06 22:21:43.718202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:10.961 [2024-12-06 22:21:43.718211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:10.961 [2024-12-06 22:21:43.718216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:10.961 [2024-12-06 22:21:43.718222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:10.961 [2024-12-06 22:21:43.718227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.961 [2024-12-06 22:21:43.718233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:10.961 [2024-12-06 22:21:43.718241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:31:10.961 [2024-12-06 22:21:43.718246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.961 [2024-12-06 22:21:43.727900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.961 [2024-12-06 22:21:43.727925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:10.961 [2024-12-06 22:21:43.727932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.641 ms 00:31:10.961 [2024-12-06 22:21:43.727938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
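
The ftl_debug.c dump just above summarizes media state at shutdown: three bands hold data (two full at 261120 blocks, one with 2048), and the write amplification factor (WAF) prints as 'inf' because the device recorded 320 total writes against 0 user writes. A minimal sketch of recomputing that figure from a saved copy of this console output — 'build.log' is a hypothetical file name standing in for wherever this log is captured, and the grep patterns assume the 'total writes: N' / 'user writes: N' line format shown above:

    # Recompute WAF from the ftl_debug.c stats dump captured in build.log.
    # build.log is a hypothetical saved copy of this console log; the grep
    # patterns match the "total writes: N" / "user writes: N" lines above.
    total=$(grep -o 'total writes: [0-9]*' build.log | tail -n1 | awk '{print $3}')
    user=$(grep -o 'user writes: [0-9]*' build.log | tail -n1 | awk '{print $3}')
    if [ "$user" -eq 0 ]; then
        echo 'WAF: inf'    # matches the dump above: 320 total writes, 0 user writes
    else
        awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi

With the values printed above (total 320, user 0) this yields 'WAF: inf', in agreement with the dump.
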
00:31:10.961 [2024-12-06 22:21:43.728216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.961 [2024-12-06 22:21:43.728232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:10.961 [2024-12-06 22:21:43.728238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:31:10.961 [2024-12-06 22:21:43.728244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.961 [2024-12-06 22:21:43.761163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.961 [2024-12-06 22:21:43.761199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:10.961 [2024-12-06 22:21:43.761207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.961 [2024-12-06 22:21:43.761217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.961 [2024-12-06 22:21:43.761240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.961 [2024-12-06 22:21:43.761246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:10.961 [2024-12-06 22:21:43.761253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.961 [2024-12-06 22:21:43.761258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.961 [2024-12-06 22:21:43.761317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.961 [2024-12-06 22:21:43.761325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:10.961 [2024-12-06 22:21:43.761331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.961 [2024-12-06 22:21:43.761337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.961 [2024-12-06 22:21:43.761352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.961 [2024-12-06 22:21:43.761359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:10.961 [2024-12-06 22:21:43.761364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.961 [2024-12-06 22:21:43.761370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.961 [2024-12-06 22:21:43.820887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.961 [2024-12-06 22:21:43.820922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:10.961 [2024-12-06 22:21:43.820929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.961 [2024-12-06 22:21:43.820936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.222 [2024-12-06 22:21:43.869840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:11.222 [2024-12-06 22:21:43.869875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:11.222 [2024-12-06 22:21:43.869884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:11.222 [2024-12-06 22:21:43.869890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.222 [2024-12-06 22:21:43.869943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:11.222 [2024-12-06 22:21:43.869952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:11.222 [2024-12-06 22:21:43.869958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:11.222 [2024-12-06 22:21:43.869964] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.222 [2024-12-06 22:21:43.870005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:11.222 [2024-12-06 22:21:43.870020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:11.222 [2024-12-06 22:21:43.870026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:11.222 [2024-12-06 22:21:43.870032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.222 [2024-12-06 22:21:43.870101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:11.222 [2024-12-06 22:21:43.870108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:11.222 [2024-12-06 22:21:43.870114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:11.222 [2024-12-06 22:21:43.870120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.222 [2024-12-06 22:21:43.870143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:11.222 [2024-12-06 22:21:43.870150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:11.222 [2024-12-06 22:21:43.870158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:11.222 [2024-12-06 22:21:43.870163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.222 [2024-12-06 22:21:43.870201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:11.222 [2024-12-06 22:21:43.870208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:11.222 [2024-12-06 22:21:43.870214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:11.222 [2024-12-06 22:21:43.870220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.222 [2024-12-06 22:21:43.870252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:11.222 [2024-12-06 22:21:43.870262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:11.222 [2024-12-06 22:21:43.870268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:11.222 [2024-12-06 22:21:43.870273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.222 [2024-12-06 22:21:43.870363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 196.874 ms, result 0 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:11.796 Remove shared memory files 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:11.796 22:21:44 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid82979 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:11.796 00:31:11.796 real 1m21.762s 00:31:11.796 user 1m53.542s 00:31:11.796 sys 0m17.345s 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.796 ************************************ 00:31:11.796 END TEST ftl_upgrade_shutdown 00:31:11.796 ************************************ 00:31:11.796 22:21:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:11.796 22:21:44 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:31:11.796 22:21:44 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:11.796 22:21:44 ftl -- ftl/ftl.sh@14 -- # killprocess 75102 00:31:11.796 22:21:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 75102 ']' 00:31:11.796 22:21:44 ftl -- common/autotest_common.sh@958 -- # kill -0 75102 00:31:11.796 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75102) - No such process 00:31:11.796 Process with pid 75102 is not found 00:31:11.796 22:21:44 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75102 is not found' 00:31:11.796 22:21:44 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:11.796 22:21:44 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83447 00:31:11.796 22:21:44 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83447 00:31:11.796 22:21:44 ftl -- common/autotest_common.sh@835 -- # '[' -z 83447 ']' 00:31:11.796 22:21:44 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.796 22:21:44 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:11.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.796 22:21:44 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.796 22:21:44 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:11.796 22:21:44 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:11.796 22:21:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:11.796 [2024-12-06 22:21:44.619976] Starting SPDK v25.01-pre git sha1 0f59982b6 / DPDK 24.03.0 initialization... 
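
The xtrace above (spdk_tgt_pid=83447, waitforlisten 83447, rpc_addr=/var/tmp/spdk.sock, max_retries=100) is the standard autotest pattern for bringing up spdk_tgt: launch the daemon, then poll its RPC UNIX socket until it answers or the retry budget runs out. A minimal sketch of that polling loop, assuming rpc.py's stock rpc_get_methods call; the function name wait_for_rpc_socket is an illustration only — the in-tree helper is waitforlisten in autotest_common.sh:

    # Sketch of the waitforlisten pattern shown above: poll the SPDK RPC
    # socket until the target answers. wait_for_rpc_socket is a hypothetical
    # name; the real helper is waitforlisten in autotest_common.sh.
    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                              # socket is up and answering
            fi
            sleep 0.1
        done
        return 1
    }

Used as 'build/bin/spdk_tgt & wait_for_rpc_socket $!', this mirrors the spdk_tgt_pid=83447 / waitforlisten 83447 sequence above.
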
00:31:11.796 [2024-12-06 22:21:44.620072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83447 ] 00:31:12.056 [2024-12-06 22:21:44.769168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.056 [2024-12-06 22:21:44.850086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.683 22:21:45 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:12.683 22:21:45 ftl -- common/autotest_common.sh@868 -- # return 0 00:31:12.683 22:21:45 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:12.953 nvme0n1 00:31:12.953 22:21:45 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:12.953 22:21:45 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:12.953 22:21:45 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:13.214 22:21:45 ftl -- ftl/common.sh@28 -- # stores=1b47043f-c8a6-4ecf-b754-e56c81d4bfe8 00:31:13.214 22:21:45 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:13.214 22:21:45 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1b47043f-c8a6-4ecf-b754-e56c81d4bfe8 00:31:13.474 22:21:46 ftl -- ftl/ftl.sh@23 -- # killprocess 83447 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@954 -- # '[' -z 83447 ']' 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@958 -- # kill -0 83447 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@959 -- # uname 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83447 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.474 killing process with pid 83447 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83447' 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@973 -- # kill 83447 00:31:13.474 22:21:46 ftl -- common/autotest_common.sh@978 -- # wait 83447 00:31:14.861 22:21:47 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:14.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:14.861 Waiting for block devices as requested 00:31:14.861 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:14.861 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:15.121 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:15.121 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:20.409 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:20.409 22:21:52 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:20.409 22:21:52 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:20.409 Remove shared memory files 00:31:20.409 22:21:52 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:20.409 22:21:52 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:20.409 22:21:52 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:20.409 22:21:52 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:20.409 22:21:52 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:20.409 00:31:20.409 real 
12m13.471s 00:31:20.409 user 14m22.881s 00:31:20.409 sys 1m17.536s 00:31:20.409 22:21:52 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.409 22:21:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:20.409 ************************************ 00:31:20.409 END TEST ftl 00:31:20.409 ************************************ 00:31:20.409 22:21:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:20.409 22:21:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:20.409 22:21:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:20.409 22:21:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:20.409 22:21:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:20.409 22:21:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:20.409 22:21:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:20.409 22:21:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:20.409 22:21:53 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:20.409 22:21:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:20.409 22:21:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.409 22:21:53 -- common/autotest_common.sh@10 -- # set +x 00:31:20.409 22:21:53 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:20.409 22:21:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:20.409 22:21:53 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:20.409 22:21:53 -- common/autotest_common.sh@10 -- # set +x 00:31:22.323 INFO: APP EXITING 00:31:22.323 INFO: killing all VMs 00:31:22.323 INFO: killing vhost app 00:31:22.323 INFO: EXIT DONE 00:31:22.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:22.584 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:22.584 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:22.844 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:22.844 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:23.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:23.377 Cleaning 00:31:23.377 Removing: /var/run/dpdk/spdk0/config 00:31:23.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:23.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:23.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:23.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:23.377 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:23.377 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:23.377 Removing: /var/run/dpdk/spdk0 00:31:23.377 Removing: /var/run/dpdk/spdk_pid56919 00:31:23.377 Removing: /var/run/dpdk/spdk_pid57127 00:31:23.377 Removing: /var/run/dpdk/spdk_pid57345 00:31:23.377 Removing: /var/run/dpdk/spdk_pid57438 00:31:23.377 Removing: /var/run/dpdk/spdk_pid57472 00:31:23.377 Removing: /var/run/dpdk/spdk_pid57594 00:31:23.377 Removing: /var/run/dpdk/spdk_pid57611 00:31:23.377 Removing: /var/run/dpdk/spdk_pid57800 00:31:23.638 Removing: /var/run/dpdk/spdk_pid57893 00:31:23.638 Removing: /var/run/dpdk/spdk_pid57984 00:31:23.638 Removing: /var/run/dpdk/spdk_pid58095 00:31:23.638 Removing: /var/run/dpdk/spdk_pid58192 00:31:23.638 Removing: /var/run/dpdk/spdk_pid58226 00:31:23.638 Removing: /var/run/dpdk/spdk_pid58268 00:31:23.638 Removing: /var/run/dpdk/spdk_pid58337 00:31:23.638 Removing: /var/run/dpdk/spdk_pid58439 00:31:23.638 Removing: /var/run/dpdk/spdk_pid58875 00:31:23.638 Removing: /var/run/dpdk/spdk_pid58939 00:31:23.638 
Removing: /var/run/dpdk/spdk_pid59002 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59018 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59120 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59136 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59233 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59249 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59307 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59325 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59378 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59396 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59556 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59593 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59682 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59854 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59938 00:31:23.638 Removing: /var/run/dpdk/spdk_pid59974 00:31:23.638 Removing: /var/run/dpdk/spdk_pid60412 00:31:23.638 Removing: /var/run/dpdk/spdk_pid60499 00:31:23.638 Removing: /var/run/dpdk/spdk_pid60610 00:31:23.638 Removing: /var/run/dpdk/spdk_pid60676 00:31:23.638 Removing: /var/run/dpdk/spdk_pid60707 00:31:23.638 Removing: /var/run/dpdk/spdk_pid60791 00:31:23.638 Removing: /var/run/dpdk/spdk_pid61407 00:31:23.638 Removing: /var/run/dpdk/spdk_pid61449 00:31:23.638 Removing: /var/run/dpdk/spdk_pid61924 00:31:23.638 Removing: /var/run/dpdk/spdk_pid62022 00:31:23.638 Removing: /var/run/dpdk/spdk_pid62137 00:31:23.638 Removing: /var/run/dpdk/spdk_pid62190 00:31:23.638 Removing: /var/run/dpdk/spdk_pid62210 00:31:23.638 Removing: /var/run/dpdk/spdk_pid62241 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64078 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64215 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64219 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64231 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64272 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64276 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64288 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64333 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64337 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64349 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64395 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64399 00:31:23.638 Removing: /var/run/dpdk/spdk_pid64411 00:31:23.638 Removing: /var/run/dpdk/spdk_pid65808 00:31:23.638 Removing: /var/run/dpdk/spdk_pid65910 00:31:23.638 Removing: /var/run/dpdk/spdk_pid67315 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69054 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69121 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69199 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69303 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69400 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69496 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69570 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69645 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69755 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69851 00:31:23.638 Removing: /var/run/dpdk/spdk_pid69949 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70023 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70098 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70203 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70295 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70391 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70465 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70546 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70650 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70752 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70848 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70922 00:31:23.638 Removing: /var/run/dpdk/spdk_pid70996 00:31:23.638 Removing: 
/var/run/dpdk/spdk_pid71067 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71147 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71254 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71345 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71434 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71514 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71588 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71662 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71737 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71835 00:31:23.638 Removing: /var/run/dpdk/spdk_pid71930 00:31:23.638 Removing: /var/run/dpdk/spdk_pid72075 00:31:23.638 Removing: /var/run/dpdk/spdk_pid72359 00:31:23.638 Removing: /var/run/dpdk/spdk_pid72396 00:31:23.638 Removing: /var/run/dpdk/spdk_pid72860 00:31:23.638 Removing: /var/run/dpdk/spdk_pid73039 00:31:23.638 Removing: /var/run/dpdk/spdk_pid73140 00:31:23.638 Removing: /var/run/dpdk/spdk_pid73255 00:31:23.638 Removing: /var/run/dpdk/spdk_pid73297 00:31:23.638 Removing: /var/run/dpdk/spdk_pid73328 00:31:23.638 Removing: /var/run/dpdk/spdk_pid73629 00:31:23.638 Removing: /var/run/dpdk/spdk_pid73686 00:31:23.943 Removing: /var/run/dpdk/spdk_pid73764 00:31:23.943 Removing: /var/run/dpdk/spdk_pid74151 00:31:23.943 Removing: /var/run/dpdk/spdk_pid74297 00:31:23.943 Removing: /var/run/dpdk/spdk_pid75102 00:31:23.943 Removing: /var/run/dpdk/spdk_pid75234 00:31:23.943 Removing: /var/run/dpdk/spdk_pid75404 00:31:23.943 Removing: /var/run/dpdk/spdk_pid75496 00:31:23.943 Removing: /var/run/dpdk/spdk_pid75795 00:31:23.943 Removing: /var/run/dpdk/spdk_pid76087 00:31:23.943 Removing: /var/run/dpdk/spdk_pid76440 00:31:23.943 Removing: /var/run/dpdk/spdk_pid76629 00:31:23.943 Removing: /var/run/dpdk/spdk_pid76760 00:31:23.943 Removing: /var/run/dpdk/spdk_pid76807 00:31:23.943 Removing: /var/run/dpdk/spdk_pid77006 00:31:23.943 Removing: /var/run/dpdk/spdk_pid77035 00:31:23.943 Removing: /var/run/dpdk/spdk_pid77089 00:31:23.943 Removing: /var/run/dpdk/spdk_pid77355 00:31:23.943 Removing: /var/run/dpdk/spdk_pid77580 00:31:23.943 Removing: /var/run/dpdk/spdk_pid78175 00:31:23.943 Removing: /var/run/dpdk/spdk_pid78922 00:31:23.943 Removing: /var/run/dpdk/spdk_pid79498 00:31:23.943 Removing: /var/run/dpdk/spdk_pid80235 00:31:23.943 Removing: /var/run/dpdk/spdk_pid80372 00:31:23.943 Removing: /var/run/dpdk/spdk_pid80458 00:31:23.943 Removing: /var/run/dpdk/spdk_pid80839 00:31:23.943 Removing: /var/run/dpdk/spdk_pid80898 00:31:23.943 Removing: /var/run/dpdk/spdk_pid81627 00:31:23.943 Removing: /var/run/dpdk/spdk_pid81921 00:31:23.943 Removing: /var/run/dpdk/spdk_pid82479 00:31:23.943 Removing: /var/run/dpdk/spdk_pid82590 00:31:23.943 Removing: /var/run/dpdk/spdk_pid82632 00:31:23.943 Removing: /var/run/dpdk/spdk_pid82690 00:31:23.943 Removing: /var/run/dpdk/spdk_pid82747 00:31:23.943 Removing: /var/run/dpdk/spdk_pid82800 00:31:23.943 Removing: /var/run/dpdk/spdk_pid82979 00:31:23.943 Removing: /var/run/dpdk/spdk_pid83048 00:31:23.943 Removing: /var/run/dpdk/spdk_pid83115 00:31:23.943 Removing: /var/run/dpdk/spdk_pid83223 00:31:23.943 Removing: /var/run/dpdk/spdk_pid83254 00:31:23.943 Removing: /var/run/dpdk/spdk_pid83322 00:31:23.943 Removing: /var/run/dpdk/spdk_pid83447 00:31:23.943 Clean 00:31:23.943 22:21:56 -- common/autotest_common.sh@1453 -- # return 0 00:31:23.943 22:21:56 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:23.943 22:21:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.943 22:21:56 -- common/autotest_common.sh@10 -- # set +x 00:31:23.943 22:21:56 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:31:23.943 22:21:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.943 22:21:56 -- common/autotest_common.sh@10 -- # set +x 00:31:23.943 22:21:56 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:23.943 22:21:56 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:23.943 22:21:56 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:23.943 22:21:56 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:23.943 22:21:56 -- spdk/autotest.sh@398 -- # hostname 00:31:23.943 22:21:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:24.227 geninfo: WARNING: invalid characters removed from testname! 00:31:50.821 22:22:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:53.367 22:22:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:55.280 22:22:28 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:57.827 22:22:30 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:59.217 22:22:32 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:02.518 22:22:34 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:03.899 22:22:36 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:03.899 22:22:36 -- spdk/autorun.sh@1 -- $ timing_finish 00:32:03.899 22:22:36 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:32:03.899 22:22:36 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:03.899 22:22:36 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:32:03.899 22:22:36 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:03.899 + [[ -n 5029 ]] 00:32:03.899 + sudo kill 5029 00:32:03.910 [Pipeline] } 00:32:03.926 [Pipeline] // timeout 00:32:03.932 [Pipeline] } 00:32:03.947 [Pipeline] // stage 00:32:03.952 [Pipeline] } 00:32:03.966 [Pipeline] // catchError 00:32:03.976 [Pipeline] stage 00:32:03.978 [Pipeline] { (Stop VM) 00:32:03.991 [Pipeline] sh 00:32:04.275 + vagrant halt 00:32:06.819 ==> default: Halting domain... 00:32:11.029 [Pipeline] sh 00:32:11.314 + vagrant destroy -f 00:32:14.615 ==> default: Removing domain... 00:32:14.889 [Pipeline] sh 00:32:15.177 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:32:15.185 [Pipeline] } 00:32:15.200 [Pipeline] // stage 00:32:15.204 [Pipeline] } 00:32:15.218 [Pipeline] // dir 00:32:15.222 [Pipeline] } 00:32:15.236 [Pipeline] // wrap 00:32:15.244 [Pipeline] } 00:32:15.257 [Pipeline] // catchError 00:32:15.265 [Pipeline] stage 00:32:15.267 [Pipeline] { (Epilogue) 00:32:15.279 [Pipeline] sh 00:32:15.565 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:20.864 [Pipeline] catchError 00:32:20.866 [Pipeline] { 00:32:20.879 [Pipeline] sh 00:32:21.197 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:21.197 Artifacts sizes are good 00:32:21.208 [Pipeline] } 00:32:21.222 [Pipeline] // catchError 00:32:21.237 [Pipeline] archiveArtifacts 00:32:21.247 Archiving artifacts 00:32:21.370 [Pipeline] cleanWs 00:32:21.384 [WS-CLEANUP] Deleting project workspace... 00:32:21.384 [WS-CLEANUP] Deferred wipeout is used... 00:32:21.391 [WS-CLEANUP] done 00:32:21.393 [Pipeline] } 00:32:21.409 [Pipeline] // stage 00:32:21.414 [Pipeline] } 00:32:21.428 [Pipeline] // node 00:32:21.434 [Pipeline] End of Pipeline 00:32:21.470 Finished: SUCCESS
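
For reference, the coverage post-processing earlier in the epilogue (the lcov invocations at 22:22:22 through 22:22:36 above) follows a conventional lcov pipeline: merge the pre-test baseline and post-test capture with -a, then strip third-party and uninteresting paths with -r. A condensed sketch using the same flags the log shows; OUT is a hypothetical shorthand for the /home/vagrant/spdk_repo/spdk/../output directory, and the last step folds the three per-pattern removals above into a single call, which lcov also accepts:

    # Condensed form of the lcov steps in the log above.
    OUT=/home/vagrant/spdk_repo/output   # hypothetical shorthand for spdk/../output
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    # Merge the pre-test baseline and the post-test capture into one tracefile.
    $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Drop DPDK and system headers from the report.
    $LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    # Drop example and standalone-app code (run as three separate -r calls above).
    $LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' \
        -o "$OUT/cov_total.info"
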